Test Report: KVM_Linux_crio 19749

                    
50b5d8ee62174b462904730e907edeaa222f14db:2024-10-11:36607

Failed tests (31/319)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 152.1
37 TestAddons/parallel/MetricsServer 357.64
46 TestAddons/StoppedEnableDisable 154.36
165 TestMultiControlPlane/serial/StopSecondaryNode 141.39
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.59
167 TestMultiControlPlane/serial/RestartSecondaryNode 6.32
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.43
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 363.81
172 TestMultiControlPlane/serial/StopCluster 141.93
232 TestMultiNode/serial/RestartKeepsNodes 326.17
234 TestMultiNode/serial/StopMultiNode 145.06
241 TestPreload 176.31
249 TestKubernetesUpgrade 523.85
321 TestStartStop/group/old-k8s-version/serial/FirstStart 287.8
347 TestStartStop/group/no-preload/serial/Stop 139.08
349 TestStartStop/group/embed-certs/serial/Stop 138.94
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
353 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 107.41
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
363 TestStartStop/group/old-k8s-version/serial/SecondStart 736.57
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.98
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.91
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.9
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.19
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 398.28
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 416.16
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 304.27
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 126.29
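
To re-run any single failure from this table locally, Go's subtest filter against minikube's integration test package is usually enough; this is a hedged sketch only (the CI job also passes kvm2/crio driver and runtime start arguments through the test binary's own flags, which are not reproduced here):

	go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'

Any name from the "Failed test" column can be substituted into the -run pattern; Go treats the slashes as subtest path separators.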
TestAddons/parallel/Ingress (152.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-335640 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-335640 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-335640 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3af527ef-278e-441a-a261-0483d6809c9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3af527ef-278e-441a-a261-0483d6809c9a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003994807s
I1011 21:03:18.373480   18814 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335640 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.052245782s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-335640 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.109
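
Note on the failure above: "ssh: Process exited with status 28" is the exit status of the remote command, and 28 is curl's "operation timed out" code, so the request hung rather than being refused. A minimal manual re-check from the host could look like the following sketch (the ingress-nginx controller deployment name below is the upstream default and is an assumption, not taken from this report):

	out/minikube-linux-amd64 -p addons-335640 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-335640 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-335640 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50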
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335640 -n addons-335640
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 logs -n 25: (1.238084398s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-873204                                                                     | download-only-873204 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-404031                                                                     | download-only-404031 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-873204                                                                     | download-only-873204 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-999700 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | binary-mirror-999700                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33833                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-999700                                                                     | binary-mirror-999700 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| addons  | enable dashboard -p                                                                         | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-335640                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-335640                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-335640 --wait=true                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 21:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | -p addons-335640                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-335640 ip                                                                            | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-335640 ssh cat                                                                       | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-5e03d062-901b-4d87-ab60-2b2a39b9acde_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-335640 ssh curl -s                                                                   | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-335640 ip                                                                            | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:45.095805   19546 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:45.095917   19546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:45.095925   19546 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:45.095928   19546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:45.096096   19546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 20:58:45.096652   19546 out.go:352] Setting JSON to false
	I1011 20:58:45.097400   19546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2470,"bootTime":1728677855,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 20:58:45.097493   19546 start.go:139] virtualization: kvm guest
	I1011 20:58:45.099538   19546 out.go:177] * [addons-335640] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 20:58:45.100872   19546 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 20:58:45.100898   19546 notify.go:220] Checking for updates...
	I1011 20:58:45.103001   19546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:45.104033   19546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:58:45.104984   19546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:45.105950   19546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 20:58:45.106936   19546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 20:58:45.108109   19546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:45.138356   19546 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 20:58:45.139595   19546 start.go:297] selected driver: kvm2
	I1011 20:58:45.139608   19546 start.go:901] validating driver "kvm2" against <nil>
	I1011 20:58:45.139618   19546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 20:58:45.140244   19546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:45.140318   19546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 20:58:45.154523   19546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 20:58:45.154568   19546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:45.154799   19546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:58:45.154828   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:58:45.154869   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:58:45.154876   19546 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:45.154921   19546 start.go:340] cluster config:
	{Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:45.155002   19546 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:45.156547   19546 out.go:177] * Starting "addons-335640" primary control-plane node in "addons-335640" cluster
	I1011 20:58:45.157626   19546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:45.157659   19546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 20:58:45.157669   19546 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:45.157748   19546 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 20:58:45.157759   19546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 20:58:45.158043   19546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json ...
	I1011 20:58:45.158061   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json: {Name:mkcc4401e0bfd13d7ad41ac79776709e9b972584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:45.158192   19546 start.go:360] acquireMachinesLock for addons-335640: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 20:58:45.158253   19546 start.go:364] duration metric: took 44.382µs to acquireMachinesLock for "addons-335640"
	I1011 20:58:45.158274   19546 start.go:93] Provisioning new machine with config: &{Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:45.158344   19546 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 20:58:45.159908   19546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1011 20:58:45.160045   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:58:45.160077   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:58:45.173387   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I1011 20:58:45.173826   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:58:45.174334   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:58:45.174352   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:58:45.174721   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:58:45.174887   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:58:45.175039   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:58:45.175176   19546 start.go:159] libmachine.API.Create for "addons-335640" (driver="kvm2")
	I1011 20:58:45.175205   19546 client.go:168] LocalClient.Create starting
	I1011 20:58:45.175244   19546 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 20:58:45.411712   19546 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 20:58:45.650822   19546 main.go:141] libmachine: Running pre-create checks...
	I1011 20:58:45.650847   19546 main.go:141] libmachine: (addons-335640) Calling .PreCreateCheck
	I1011 20:58:45.651347   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:58:45.651829   19546 main.go:141] libmachine: Creating machine...
	I1011 20:58:45.651851   19546 main.go:141] libmachine: (addons-335640) Calling .Create
	I1011 20:58:45.652070   19546 main.go:141] libmachine: (addons-335640) Creating KVM machine...
	I1011 20:58:45.653549   19546 main.go:141] libmachine: (addons-335640) DBG | found existing default KVM network
	I1011 20:58:45.654309   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.654156   19568 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011b1f0}
	I1011 20:58:45.654332   19546 main.go:141] libmachine: (addons-335640) DBG | created network xml: 
	I1011 20:58:45.654344   19546 main.go:141] libmachine: (addons-335640) DBG | <network>
	I1011 20:58:45.654358   19546 main.go:141] libmachine: (addons-335640) DBG |   <name>mk-addons-335640</name>
	I1011 20:58:45.654412   19546 main.go:141] libmachine: (addons-335640) DBG |   <dns enable='no'/>
	I1011 20:58:45.654444   19546 main.go:141] libmachine: (addons-335640) DBG |   
	I1011 20:58:45.654456   19546 main.go:141] libmachine: (addons-335640) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 20:58:45.654468   19546 main.go:141] libmachine: (addons-335640) DBG |     <dhcp>
	I1011 20:58:45.654478   19546 main.go:141] libmachine: (addons-335640) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 20:58:45.654486   19546 main.go:141] libmachine: (addons-335640) DBG |     </dhcp>
	I1011 20:58:45.654496   19546 main.go:141] libmachine: (addons-335640) DBG |   </ip>
	I1011 20:58:45.654502   19546 main.go:141] libmachine: (addons-335640) DBG |   
	I1011 20:58:45.654514   19546 main.go:141] libmachine: (addons-335640) DBG | </network>
	I1011 20:58:45.654524   19546 main.go:141] libmachine: (addons-335640) DBG | 
	I1011 20:58:45.659531   19546 main.go:141] libmachine: (addons-335640) DBG | trying to create private KVM network mk-addons-335640 192.168.39.0/24...
	I1011 20:58:45.723458   19546 main.go:141] libmachine: (addons-335640) DBG | private KVM network mk-addons-335640 192.168.39.0/24 created
	I1011 20:58:45.723521   19546 main.go:141] libmachine: (addons-335640) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 ...
	I1011 20:58:45.723555   19546 main.go:141] libmachine: (addons-335640) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 20:58:45.723572   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.723443   19568 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:45.723604   19546 main.go:141] libmachine: (addons-335640) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 20:58:45.992379   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.992252   19568 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa...
	I1011 20:58:46.322463   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:46.322359   19568 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/addons-335640.rawdisk...
	I1011 20:58:46.322501   19546 main.go:141] libmachine: (addons-335640) DBG | Writing magic tar header
	I1011 20:58:46.322522   19546 main.go:141] libmachine: (addons-335640) DBG | Writing SSH key tar header
	I1011 20:58:46.322540   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:46.322454   19568 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 ...
	I1011 20:58:46.322561   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640
	I1011 20:58:46.322574   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 20:58:46.322581   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 (perms=drwx------)
	I1011 20:58:46.322593   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 20:58:46.322603   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 20:58:46.322628   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 20:58:46.322645   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 20:58:46.322653   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:46.322665   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 20:58:46.322681   19546 main.go:141] libmachine: (addons-335640) Creating domain...
	I1011 20:58:46.322694   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 20:58:46.322709   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 20:58:46.322720   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins
	I1011 20:58:46.322733   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home
	I1011 20:58:46.322744   19546 main.go:141] libmachine: (addons-335640) DBG | Skipping /home - not owner
	I1011 20:58:46.323645   19546 main.go:141] libmachine: (addons-335640) define libvirt domain using xml: 
	I1011 20:58:46.323672   19546 main.go:141] libmachine: (addons-335640) <domain type='kvm'>
	I1011 20:58:46.323689   19546 main.go:141] libmachine: (addons-335640)   <name>addons-335640</name>
	I1011 20:58:46.323716   19546 main.go:141] libmachine: (addons-335640)   <memory unit='MiB'>4000</memory>
	I1011 20:58:46.323722   19546 main.go:141] libmachine: (addons-335640)   <vcpu>2</vcpu>
	I1011 20:58:46.323729   19546 main.go:141] libmachine: (addons-335640)   <features>
	I1011 20:58:46.323735   19546 main.go:141] libmachine: (addons-335640)     <acpi/>
	I1011 20:58:46.323739   19546 main.go:141] libmachine: (addons-335640)     <apic/>
	I1011 20:58:46.323744   19546 main.go:141] libmachine: (addons-335640)     <pae/>
	I1011 20:58:46.323748   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.323777   19546 main.go:141] libmachine: (addons-335640)   </features>
	I1011 20:58:46.323796   19546 main.go:141] libmachine: (addons-335640)   <cpu mode='host-passthrough'>
	I1011 20:58:46.323802   19546 main.go:141] libmachine: (addons-335640)   
	I1011 20:58:46.323809   19546 main.go:141] libmachine: (addons-335640)   </cpu>
	I1011 20:58:46.323815   19546 main.go:141] libmachine: (addons-335640)   <os>
	I1011 20:58:46.323819   19546 main.go:141] libmachine: (addons-335640)     <type>hvm</type>
	I1011 20:58:46.323830   19546 main.go:141] libmachine: (addons-335640)     <boot dev='cdrom'/>
	I1011 20:58:46.323841   19546 main.go:141] libmachine: (addons-335640)     <boot dev='hd'/>
	I1011 20:58:46.323855   19546 main.go:141] libmachine: (addons-335640)     <bootmenu enable='no'/>
	I1011 20:58:46.323861   19546 main.go:141] libmachine: (addons-335640)   </os>
	I1011 20:58:46.323868   19546 main.go:141] libmachine: (addons-335640)   <devices>
	I1011 20:58:46.323874   19546 main.go:141] libmachine: (addons-335640)     <disk type='file' device='cdrom'>
	I1011 20:58:46.323881   19546 main.go:141] libmachine: (addons-335640)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/boot2docker.iso'/>
	I1011 20:58:46.323886   19546 main.go:141] libmachine: (addons-335640)       <target dev='hdc' bus='scsi'/>
	I1011 20:58:46.323890   19546 main.go:141] libmachine: (addons-335640)       <readonly/>
	I1011 20:58:46.323895   19546 main.go:141] libmachine: (addons-335640)     </disk>
	I1011 20:58:46.323903   19546 main.go:141] libmachine: (addons-335640)     <disk type='file' device='disk'>
	I1011 20:58:46.323912   19546 main.go:141] libmachine: (addons-335640)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 20:58:46.323928   19546 main.go:141] libmachine: (addons-335640)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/addons-335640.rawdisk'/>
	I1011 20:58:46.323942   19546 main.go:141] libmachine: (addons-335640)       <target dev='hda' bus='virtio'/>
	I1011 20:58:46.323953   19546 main.go:141] libmachine: (addons-335640)     </disk>
	I1011 20:58:46.323960   19546 main.go:141] libmachine: (addons-335640)     <interface type='network'>
	I1011 20:58:46.323972   19546 main.go:141] libmachine: (addons-335640)       <source network='mk-addons-335640'/>
	I1011 20:58:46.323979   19546 main.go:141] libmachine: (addons-335640)       <model type='virtio'/>
	I1011 20:58:46.323986   19546 main.go:141] libmachine: (addons-335640)     </interface>
	I1011 20:58:46.323990   19546 main.go:141] libmachine: (addons-335640)     <interface type='network'>
	I1011 20:58:46.323998   19546 main.go:141] libmachine: (addons-335640)       <source network='default'/>
	I1011 20:58:46.324002   19546 main.go:141] libmachine: (addons-335640)       <model type='virtio'/>
	I1011 20:58:46.324011   19546 main.go:141] libmachine: (addons-335640)     </interface>
	I1011 20:58:46.324023   19546 main.go:141] libmachine: (addons-335640)     <serial type='pty'>
	I1011 20:58:46.324031   19546 main.go:141] libmachine: (addons-335640)       <target port='0'/>
	I1011 20:58:46.324043   19546 main.go:141] libmachine: (addons-335640)     </serial>
	I1011 20:58:46.324052   19546 main.go:141] libmachine: (addons-335640)     <console type='pty'>
	I1011 20:58:46.324067   19546 main.go:141] libmachine: (addons-335640)       <target type='serial' port='0'/>
	I1011 20:58:46.324080   19546 main.go:141] libmachine: (addons-335640)     </console>
	I1011 20:58:46.324088   19546 main.go:141] libmachine: (addons-335640)     <rng model='virtio'>
	I1011 20:58:46.324093   19546 main.go:141] libmachine: (addons-335640)       <backend model='random'>/dev/random</backend>
	I1011 20:58:46.324100   19546 main.go:141] libmachine: (addons-335640)     </rng>
	I1011 20:58:46.324103   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.324110   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.324114   19546 main.go:141] libmachine: (addons-335640)   </devices>
	I1011 20:58:46.324118   19546 main.go:141] libmachine: (addons-335640) </domain>
	I1011 20:58:46.324124   19546 main.go:141] libmachine: (addons-335640) 
	I1011 20:58:46.382034   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:1f:29:89 in network default
	I1011 20:58:46.382570   19546 main.go:141] libmachine: (addons-335640) Ensuring networks are active...
	I1011 20:58:46.382590   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:46.383273   19546 main.go:141] libmachine: (addons-335640) Ensuring network default is active
	I1011 20:58:46.383559   19546 main.go:141] libmachine: (addons-335640) Ensuring network mk-addons-335640 is active
	I1011 20:58:46.383992   19546 main.go:141] libmachine: (addons-335640) Getting domain xml...
	I1011 20:58:46.384580   19546 main.go:141] libmachine: (addons-335640) Creating domain...
	I1011 20:58:47.927204   19546 main.go:141] libmachine: (addons-335640) Waiting to get IP...
	I1011 20:58:47.928068   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:47.928549   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:47.928578   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:47.928535   19568 retry.go:31] will retry after 254.276274ms: waiting for machine to come up
	I1011 20:58:48.184671   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.185021   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.185048   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.184978   19568 retry.go:31] will retry after 249.718028ms: waiting for machine to come up
	I1011 20:58:48.436506   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.436904   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.436932   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.436858   19568 retry.go:31] will retry after 468.619344ms: waiting for machine to come up
	I1011 20:58:48.907487   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.907879   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.907908   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.907844   19568 retry.go:31] will retry after 547.218559ms: waiting for machine to come up
	I1011 20:58:49.456565   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:49.457038   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:49.457059   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:49.456997   19568 retry.go:31] will retry after 666.004256ms: waiting for machine to come up
	I1011 20:58:50.124650   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:50.125033   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:50.125053   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:50.124995   19568 retry.go:31] will retry after 844.774679ms: waiting for machine to come up
	I1011 20:58:50.971169   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:50.971566   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:50.971586   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:50.971530   19568 retry.go:31] will retry after 772.181307ms: waiting for machine to come up
	I1011 20:58:51.745330   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:51.745746   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:51.745772   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:51.745704   19568 retry.go:31] will retry after 1.038747096s: waiting for machine to come up
	I1011 20:58:52.785748   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:52.786175   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:52.786211   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:52.786142   19568 retry.go:31] will retry after 1.304891238s: waiting for machine to come up
	I1011 20:58:54.092429   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:54.092819   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:54.092845   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:54.092778   19568 retry.go:31] will retry after 1.637422366s: waiting for machine to come up
	I1011 20:58:55.731521   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:55.731925   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:55.731948   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:55.731891   19568 retry.go:31] will retry after 2.869520339s: waiting for machine to come up
	I1011 20:58:58.605028   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:58.605487   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:58.605508   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:58.605454   19568 retry.go:31] will retry after 3.228381586s: waiting for machine to come up
	I1011 20:59:01.836051   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:01.836450   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:59:01.836471   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:59:01.836402   19568 retry.go:31] will retry after 3.104216969s: waiting for machine to come up
	I1011 20:59:04.944517   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:04.944993   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:59:04.945017   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:59:04.944941   19568 retry.go:31] will retry after 4.185077738s: waiting for machine to come up
	I1011 20:59:09.134077   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.134501   19546 main.go:141] libmachine: (addons-335640) Found IP for machine: 192.168.39.109
	I1011 20:59:09.134524   19546 main.go:141] libmachine: (addons-335640) Reserving static IP address...
	I1011 20:59:09.134536   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has current primary IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.134940   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find host DHCP lease matching {name: "addons-335640", mac: "52:54:00:8b:e5:d7", ip: "192.168.39.109"} in network mk-addons-335640
	I1011 20:59:09.201559   19546 main.go:141] libmachine: (addons-335640) DBG | Getting to WaitForSSH function...
	I1011 20:59:09.201589   19546 main.go:141] libmachine: (addons-335640) Reserved static IP address: 192.168.39.109
	I1011 20:59:09.201602   19546 main.go:141] libmachine: (addons-335640) Waiting for SSH to be available...
	I1011 20:59:09.204242   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.204691   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.204718   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.204885   19546 main.go:141] libmachine: (addons-335640) DBG | Using SSH client type: external
	I1011 20:59:09.204896   19546 main.go:141] libmachine: (addons-335640) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa (-rw-------)
	I1011 20:59:09.204916   19546 main.go:141] libmachine: (addons-335640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 20:59:09.204924   19546 main.go:141] libmachine: (addons-335640) DBG | About to run SSH command:
	I1011 20:59:09.204931   19546 main.go:141] libmachine: (addons-335640) DBG | exit 0
	I1011 20:59:09.338941   19546 main.go:141] libmachine: (addons-335640) DBG | SSH cmd err, output: <nil>: 
	I1011 20:59:09.339300   19546 main.go:141] libmachine: (addons-335640) KVM machine creation complete!
	I1011 20:59:09.339654   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:59:09.340181   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:09.340434   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:09.340624   19546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 20:59:09.340646   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:09.341805   19546 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 20:59:09.341820   19546 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 20:59:09.341825   19546 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 20:59:09.341830   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.343973   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.344310   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.344339   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.344464   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.344603   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.344724   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.344806   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.344906   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.345082   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.345093   19546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 20:59:09.453911   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:59:09.453930   19546 main.go:141] libmachine: Detecting the provisioner...
	I1011 20:59:09.453938   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.456692   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.457185   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.457226   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.457437   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.457668   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.457850   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.457971   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.458136   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.458308   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.458321   19546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 20:59:09.567372   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 20:59:09.567432   19546 main.go:141] libmachine: found compatible host: buildroot
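	A note on the step above: the provisioner is inferred from the /etc/os-release output returned over SSH. As an illustration only (not minikube's actual code), here is a minimal Go sketch that parses that KEY=VALUE output and recognizes a Buildroot guest; parseOSRelease is a hypothetical helper name.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns KEY=VALUE lines into a map, stripping surrounding quotes.
	func parseOSRelease(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		// Sample input taken from the os-release block logged above.
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fields := parseOSRelease(out)
		if fields["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot", fields["VERSION_ID"])
		}
	}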
	I1011 20:59:09.567438   19546 main.go:141] libmachine: Provisioning with buildroot...
	I1011 20:59:09.567451   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.567692   19546 buildroot.go:166] provisioning hostname "addons-335640"
	I1011 20:59:09.567717   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.567890   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.570834   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.571151   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.571176   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.571283   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.571470   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.571658   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.571816   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.571983   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.572236   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.572253   19546 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-335640 && echo "addons-335640" | sudo tee /etc/hostname
	I1011 20:59:09.697442   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335640
	
	I1011 20:59:09.697472   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.700221   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.700588   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.700625   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.700777   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.700958   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.701092   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.701194   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.701320   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.701526   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.701550   19546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335640/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 20:59:09.819460   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
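	The hostname step above runs an idempotent /etc/hosts update over SSH: replace an existing 127.0.1.1 entry if present, otherwise append one. A minimal sketch of templating that command for a given hostname follows; hostsUpdateCmd is a hypothetical helper, not minikube's API.

	package main

	import "fmt"

	// hostsUpdateCmd renders the same shell snippet shown in the log above
	// for an arbitrary hostname.
	func hostsUpdateCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("addons-335640"))
	}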
	I1011 20:59:09.819495   19546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 20:59:09.819545   19546 buildroot.go:174] setting up certificates
	I1011 20:59:09.819563   19546 provision.go:84] configureAuth start
	I1011 20:59:09.819582   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.819854   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:09.822188   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.822458   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.822482   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.822593   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.824937   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.825235   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.825262   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.825372   19546 provision.go:143] copyHostCerts
	I1011 20:59:09.825461   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 20:59:09.825660   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 20:59:09.825762   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 20:59:09.825841   19546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.addons-335640 san=[127.0.0.1 192.168.39.109 addons-335640 localhost minikube]
	I1011 20:59:10.017292   19546 provision.go:177] copyRemoteCerts
	I1011 20:59:10.017349   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 20:59:10.017371   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.019883   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.020386   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.020424   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.020594   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.020750   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.020860   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.020969   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
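	The "new ssh client" line above carries everything needed for the remote copies that follow: IP, port, private key path and user. A rough sketch of opening such a session with golang.org/x/crypto/ssh under those assumptions; this is not minikube's sshutil implementation, and error handling is minimal.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user and address taken from the log line above.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no used earlier
		}
		client, err := ssh.Dial("tcp", "192.168.39.109:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}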
	I1011 20:59:10.106005   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 20:59:10.131409   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 20:59:10.154440   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 20:59:10.177247   19546 provision.go:87] duration metric: took 357.667235ms to configureAuth
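	configureAuth, completed above, generates a server certificate whose subject alternative names match the san=[...] list in the log and signs it with the minikube CA before copying it to /etc/docker. For illustration only, here is a self-signed variant with the same SANs using Go's crypto/x509; the CA-signing step is omitted and the lifetime is an assumption.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-335640"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // assumed lifetime, matching the CertExpiration value later in this log
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the san=[...] list logged above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.109")},
			DNSNames:    []string{"addons-335640", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}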
	I1011 20:59:10.177274   19546 buildroot.go:189] setting minikube options for container-runtime
	I1011 20:59:10.177447   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:10.177516   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.180373   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.180727   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.180759   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.180941   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.181128   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.181286   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.181407   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.181578   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:10.181775   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:10.181795   19546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 20:59:10.401715   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 20:59:10.401739   19546 main.go:141] libmachine: Checking connection to Docker...
	I1011 20:59:10.401748   19546 main.go:141] libmachine: (addons-335640) Calling .GetURL
	I1011 20:59:10.403011   19546 main.go:141] libmachine: (addons-335640) DBG | Using libvirt version 6000000
	I1011 20:59:10.405132   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.405390   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.405413   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.405544   19546 main.go:141] libmachine: Docker is up and running!
	I1011 20:59:10.405557   19546 main.go:141] libmachine: Reticulating splines...
	I1011 20:59:10.405565   19546 client.go:171] duration metric: took 25.230349012s to LocalClient.Create
	I1011 20:59:10.405592   19546 start.go:167] duration metric: took 25.230416192s to libmachine.API.Create "addons-335640"
	I1011 20:59:10.405605   19546 start.go:293] postStartSetup for "addons-335640" (driver="kvm2")
	I1011 20:59:10.405624   19546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 20:59:10.405647   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.405883   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 20:59:10.405911   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.407980   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.408276   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.408302   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.408391   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.408569   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.408709   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.408856   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.492466   19546 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 20:59:10.496604   19546 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 20:59:10.496631   19546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 20:59:10.496698   19546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 20:59:10.496721   19546 start.go:296] duration metric: took 91.104646ms for postStartSetup
	I1011 20:59:10.496748   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:59:10.497246   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:10.499792   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.500125   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.500153   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.500384   19546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json ...
	I1011 20:59:10.500580   19546 start.go:128] duration metric: took 25.342225257s to createHost
	I1011 20:59:10.500603   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.502965   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.503275   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.503295   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.503439   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.503618   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.503806   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.503941   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.504097   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:10.504247   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:10.504257   19546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 20:59:10.615108   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728680350.590549237
	
	I1011 20:59:10.615131   19546 fix.go:216] guest clock: 1728680350.590549237
	I1011 20:59:10.615140   19546 fix.go:229] Guest: 2024-10-11 20:59:10.590549237 +0000 UTC Remote: 2024-10-11 20:59:10.500593928 +0000 UTC m=+25.440663918 (delta=89.955309ms)
	I1011 20:59:10.615164   19546 fix.go:200] guest clock delta is within tolerance: 89.955309ms
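	The clock check above parses the guest's date +%s.%N output and compares it to the host clock. A small sketch of that comparison, assuming a 9-digit nanosecond field and an illustrative 1-second tolerance (the actual tolerance is not shown in this log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses "seconds.nanoseconds" as printed by date +%s.%N,
	// assuming the fractional part is a full 9-digit nanosecond field.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Sample value taken from the guest clock line above.
		guest, err := parseGuestClock("1728680350.590549237")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within assumed 1s tolerance: %v)\n", delta, delta < time.Second)
	}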
	I1011 20:59:10.615171   19546 start.go:83] releasing machines lock for "addons-335640", held for 25.456906139s
	I1011 20:59:10.615211   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.615455   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:10.617866   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.618186   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.618211   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.618359   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.618786   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.618947   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.619036   19546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 20:59:10.619085   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.619121   19546 ssh_runner.go:195] Run: cat /version.json
	I1011 20:59:10.619139   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.621546   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.621725   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.621966   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.621990   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.622066   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.622091   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.622104   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.622288   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.622290   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.622482   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.622491   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.622609   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.622641   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.622748   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.721293   19546 ssh_runner.go:195] Run: systemctl --version
	I1011 20:59:10.726845   19546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 20:59:10.880943   19546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 20:59:10.887396   19546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 20:59:10.887452   19546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:59:10.903262   19546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 20:59:10.903284   19546 start.go:495] detecting cgroup driver to use...
	I1011 20:59:10.903341   19546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 20:59:10.919240   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 20:59:10.932570   19546 docker.go:217] disabling cri-docker service (if available) ...
	I1011 20:59:10.932611   19546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 20:59:10.945530   19546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 20:59:10.958778   19546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 20:59:11.070368   19546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 20:59:11.209447   19546 docker.go:233] disabling docker service ...
	I1011 20:59:11.209531   19546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 20:59:11.227976   19546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 20:59:11.240967   19546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 20:59:11.369226   19546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 20:59:11.478432   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 20:59:11.492048   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 20:59:11.510159   19546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 20:59:11.510221   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.519862   19546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 20:59:11.519918   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.529783   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.539335   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.549111   19546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 20:59:11.558765   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.568749   19546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.585810   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
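	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses the pause:3.10 image and the cgroupfs cgroup manager. The same two substitutions expressed in Go, applied to a hypothetical config fragment rather than the real file:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical 02-crio.conf fragment; the real file lives on the guest.
		conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}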
	I1011 20:59:11.595932   19546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 20:59:11.605399   19546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 20:59:11.605436   19546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 20:59:11.617207   19546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 20:59:11.626271   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:11.729923   19546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 20:59:11.815515   19546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 20:59:11.815619   19546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 20:59:11.820243   19546 start.go:563] Will wait 60s for crictl version
	I1011 20:59:11.820303   19546 ssh_runner.go:195] Run: which crictl
	I1011 20:59:11.823957   19546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 20:59:11.859903   19546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 20:59:11.860025   19546 ssh_runner.go:195] Run: crio --version
	I1011 20:59:11.886169   19546 ssh_runner.go:195] Run: crio --version
	I1011 20:59:11.920120   19546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 20:59:11.921440   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:11.924313   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:11.924611   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:11.924641   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:11.924852   19546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 20:59:11.929004   19546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:59:11.943020   19546 kubeadm.go:883] updating cluster {Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 20:59:11.943108   19546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:59:11.943147   19546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:59:11.977761   19546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 20:59:11.977817   19546 ssh_runner.go:195] Run: which lz4
	I1011 20:59:11.981746   19546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 20:59:11.985848   19546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 20:59:11.985876   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 20:59:13.241545   19546 crio.go:462] duration metric: took 1.259823876s to copy over tarball
	I1011 20:59:13.241631   19546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 20:59:15.322988   19546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.081328818s)
	I1011 20:59:15.323014   19546 crio.go:469] duration metric: took 2.081436779s to extract the tarball
	I1011 20:59:15.323020   19546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 20:59:15.359316   19546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:59:15.398629   19546 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:59:15.398657   19546 cache_images.go:84] Images are preloaded, skipping loading
	I1011 20:59:15.398668   19546 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.1 crio true true} ...
	I1011 20:59:15.398762   19546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-335640 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
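	The kubelet drop-in above is assembled from the node's kubelet binary path, hostname and IP. Below is a minimal templating sketch under those assumptions; execStartLine and nodeOpts are hypothetical names, not minikube internals.

	package main

	import "fmt"

	type nodeOpts struct {
		KubeletPath string
		Hostname    string
		NodeIP      string
	}

	// execStartLine renders the ExecStart line shown in the unit above.
	func execStartLine(o nodeOpts) string {
		return fmt.Sprintf("ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
			o.KubeletPath, o.Hostname, o.NodeIP)
	}

	func main() {
		fmt.Println(execStartLine(nodeOpts{
			KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
			Hostname:    "addons-335640",
			NodeIP:      "192.168.39.109",
		}))
	}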
	I1011 20:59:15.398825   19546 ssh_runner.go:195] Run: crio config
	I1011 20:59:15.440683   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:59:15.440704   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:59:15.440715   19546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 20:59:15.440736   19546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335640 NodeName:addons-335640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 20:59:15.440889   19546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335640"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 20:59:15.440951   19546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 20:59:15.451534   19546 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 20:59:15.451588   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 20:59:15.461511   19546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 20:59:15.479080   19546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 20:59:15.494746   19546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1011 20:59:15.510069   19546 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I1011 20:59:15.513532   19546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:59:15.524827   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:15.640530   19546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:59:15.656632   19546 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640 for IP: 192.168.39.109
	I1011 20:59:15.656656   19546 certs.go:194] generating shared ca certs ...
	I1011 20:59:15.656675   19546 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.656833   19546 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 20:59:15.750119   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt ...
	I1011 20:59:15.750145   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt: {Name:mk59e4c1dd20a57ddfdecdead44a6c371bcde09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.750305   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key ...
	I1011 20:59:15.750315   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key: {Name:mkd5a8efca580bc196234d3996e36d59c7b10106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.750378   19546 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 20:59:15.899980   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt ...
	I1011 20:59:15.900005   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt: {Name:mkaf5d3d9a411319b7249c0cf53803531482c9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.900145   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key ...
	I1011 20:59:15.900154   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key: {Name:mkcec5e8de07126d8bd86589cc4b12e25aacbb98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.900219   19546 certs.go:256] generating profile certs ...
	I1011 20:59:15.900281   19546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key
	I1011 20:59:15.900295   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt with IP's: []
	I1011 20:59:16.242464   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt ...
	I1011 20:59:16.242496   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: {Name:mkb1894ea5e6639a50eda6724b826de9b1c4351f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.242695   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key ...
	I1011 20:59:16.242711   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key: {Name:mkad705e8f90bda02c5fdd44b787aef0c0e96380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.242816   19546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21
	I1011 20:59:16.242841   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109]
	I1011 20:59:16.391701   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 ...
	I1011 20:59:16.391731   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21: {Name:mk996a223a3c5b4f3388013a4020ebd8365a247d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.391906   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21 ...
	I1011 20:59:16.391922   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21: {Name:mkfa1ce2ca00815018180f9fccdbfe365ed06a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.392014   19546 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt
	I1011 20:59:16.392107   19546 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key
	I1011 20:59:16.392184   19546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key
	I1011 20:59:16.392221   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt with IP's: []
	I1011 20:59:16.548763   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt ...
	I1011 20:59:16.548793   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt: {Name:mk072dae23020d365c8024519557199cd3978574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.548969   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key ...
	I1011 20:59:16.548983   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key: {Name:mk957d5f5556f37c9c09f52acb478d5bd144d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.549173   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 20:59:16.549222   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 20:59:16.549261   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 20:59:16.549295   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 20:59:16.549879   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 20:59:16.576634   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 20:59:16.598104   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 20:59:16.630239   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 20:59:16.652867   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1011 20:59:16.675176   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 20:59:16.697252   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 20:59:16.719557   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 20:59:16.741882   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 20:59:16.763713   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 20:59:16.778887   19546 ssh_runner.go:195] Run: openssl version
	I1011 20:59:16.784328   19546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 20:59:16.793997   19546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.798109   19546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.798159   19546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.803654   19546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 20:59:16.813332   19546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 20:59:16.817034   19546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 20:59:16.817076   19546 kubeadm.go:392] StartCluster: {Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:59:16.817201   19546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 20:59:16.817231   19546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 20:59:16.854629   19546 cri.go:89] found id: ""
	I1011 20:59:16.854686   19546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 20:59:16.863627   19546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 20:59:16.875687   19546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 20:59:16.886486   19546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 20:59:16.886503   19546 kubeadm.go:157] found existing configuration files:
	
	I1011 20:59:16.886538   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 20:59:16.895457   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 20:59:16.895514   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 20:59:16.904355   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 20:59:16.914064   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 20:59:16.914128   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 20:59:16.923397   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 20:59:16.932185   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 20:59:16.932244   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 20:59:16.940959   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 20:59:16.949445   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 20:59:16.949485   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 20:59:16.958007   19546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 20:59:17.007883   19546 kubeadm.go:310] W1011 20:59:16.990457     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:59:17.009109   19546 kubeadm.go:310] W1011 20:59:16.991989     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:59:17.120320   19546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 20:59:27.374404   19546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 20:59:27.374474   19546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 20:59:27.374553   19546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 20:59:27.374680   19546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 20:59:27.374818   19546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 20:59:27.374875   19546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 20:59:27.376394   19546 out.go:235]   - Generating certificates and keys ...
	I1011 20:59:27.376476   19546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 20:59:27.376530   19546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 20:59:27.376589   19546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 20:59:27.376639   19546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 20:59:27.376691   19546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 20:59:27.376733   19546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 20:59:27.376778   19546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 20:59:27.376897   19546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-335640 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I1011 20:59:27.376944   19546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 20:59:27.377044   19546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-335640 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I1011 20:59:27.377143   19546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 20:59:27.377257   19546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 20:59:27.377329   19546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 20:59:27.377403   19546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 20:59:27.377488   19546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 20:59:27.377583   19546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 20:59:27.377664   19546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 20:59:27.377742   19546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 20:59:27.377814   19546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 20:59:27.377915   19546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 20:59:27.378005   19546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 20:59:27.379746   19546 out.go:235]   - Booting up control plane ...
	I1011 20:59:27.379834   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 20:59:27.379916   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 20:59:27.380005   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 20:59:27.380149   19546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 20:59:27.380274   19546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 20:59:27.380332   19546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 20:59:27.380484   19546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 20:59:27.380573   19546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 20:59:27.380625   19546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002127902s
	I1011 20:59:27.380684   19546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 20:59:27.380732   19546 kubeadm.go:310] [api-check] The API server is healthy after 5.001807492s
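The [kubelet-check] and [api-check] probes above hit the standard kubelet and API-server health endpoints. As an illustrative sketch only (not commands captured in this run), the same checks can be reproduced by hand from inside the minikube VM; the kubectl path and kubeconfig below are the ones this log uses elsewhere:

# kubelet liveness, the endpoint named in the [kubelet-check] phase
curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
# API-server readiness, the condition behind the [api-check] phase
sudo KUBECONFIG=/etc/kubernetes/admin.conf \
  /var/lib/minikube/binaries/v1.31.1/kubectl get --raw='/readyz?verbose' | tail -n 3
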
	I1011 20:59:27.380836   19546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 20:59:27.380978   19546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 20:59:27.381043   19546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 20:59:27.381228   19546 kubeadm.go:310] [mark-control-plane] Marking the node addons-335640 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 20:59:27.381273   19546 kubeadm.go:310] [bootstrap-token] Using token: fr560h.qqb2i4guniq1cfyk
	I1011 20:59:27.382828   19546 out.go:235]   - Configuring RBAC rules ...
	I1011 20:59:27.382946   19546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 20:59:27.383035   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 20:59:27.383198   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 20:59:27.383353   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 20:59:27.383477   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 20:59:27.383562   19546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 20:59:27.383691   19546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 20:59:27.383761   19546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 20:59:27.383809   19546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 20:59:27.383815   19546 kubeadm.go:310] 
	I1011 20:59:27.383860   19546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 20:59:27.383869   19546 kubeadm.go:310] 
	I1011 20:59:27.383935   19546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 20:59:27.383944   19546 kubeadm.go:310] 
	I1011 20:59:27.383968   19546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 20:59:27.384017   19546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 20:59:27.384057   19546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 20:59:27.384062   19546 kubeadm.go:310] 
	I1011 20:59:27.384109   19546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 20:59:27.384114   19546 kubeadm.go:310] 
	I1011 20:59:27.384149   19546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 20:59:27.384155   19546 kubeadm.go:310] 
	I1011 20:59:27.384200   19546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 20:59:27.384276   19546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 20:59:27.384364   19546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 20:59:27.384380   19546 kubeadm.go:310] 
	I1011 20:59:27.384482   19546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 20:59:27.384587   19546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 20:59:27.384597   19546 kubeadm.go:310] 
	I1011 20:59:27.384697   19546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fr560h.qqb2i4guniq1cfyk \
	I1011 20:59:27.384807   19546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 20:59:27.384828   19546 kubeadm.go:310] 	--control-plane 
	I1011 20:59:27.384832   19546 kubeadm.go:310] 
	I1011 20:59:27.384900   19546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 20:59:27.384906   19546 kubeadm.go:310] 
	I1011 20:59:27.384972   19546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fr560h.qqb2i4guniq1cfyk \
	I1011 20:59:27.385068   19546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 20:59:27.385078   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:59:27.385084   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:59:27.386640   19546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 20:59:27.387711   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 20:59:27.398468   19546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
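The 1-k8s.conflist deployed above is minikube's bridge CNI configuration for the crio runtime. As a rough illustration of what such a conflist contains (field values, including the pod subnet, are assumptions for illustration and not copied from this run):

# illustrative only; minikube generates and copies this file itself
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
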
	I1011 20:59:27.418036   19546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 20:59:27.418131   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:27.418136   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335640 minikube.k8s.io/updated_at=2024_10_11T20_59_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=addons-335640 minikube.k8s.io/primary=true
	I1011 20:59:27.455258   19546 ops.go:34] apiserver oom_adj: -16
	I1011 20:59:27.563867   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:28.064669   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:28.563956   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:29.064719   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:29.564686   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:30.064046   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:30.564700   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.064701   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.564540   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.688311   19546 kubeadm.go:1113] duration metric: took 4.270258923s to wait for elevateKubeSystemPrivileges
	I1011 20:59:31.688354   19546 kubeadm.go:394] duration metric: took 14.871281082s to StartCluster
	I1011 20:59:31.688377   19546 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:31.688512   19546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:59:31.688967   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:31.689144   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 20:59:31.689154   19546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:59:31.689214   19546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
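The toEnable map above is the addon set requested by this test profile. Outside the test harness the same toggles are normally driven through the minikube CLI; a generic usage sketch (the profile name is taken from this run, the addon names are examples):

minikube -p addons-335640 addons list
minikube -p addons-335640 addons enable metrics-server
minikube -p addons-335640 addons disable volcano
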
	I1011 20:59:31.689330   19546 addons.go:69] Setting yakd=true in profile "addons-335640"
	I1011 20:59:31.689345   19546 addons.go:69] Setting inspektor-gadget=true in profile "addons-335640"
	I1011 20:59:31.689356   19546 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-335640"
	I1011 20:59:31.689360   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:31.689370   19546 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335640"
	I1011 20:59:31.689373   19546 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-335640"
	I1011 20:59:31.689363   19546 addons.go:69] Setting storage-provisioner=true in profile "addons-335640"
	I1011 20:59:31.689392   19546 addons.go:69] Setting cloud-spanner=true in profile "addons-335640"
	I1011 20:59:31.689364   19546 addons.go:234] Setting addon inspektor-gadget=true in "addons-335640"
	I1011 20:59:31.689402   19546 addons.go:234] Setting addon cloud-spanner=true in "addons-335640"
	I1011 20:59:31.689391   19546 addons.go:69] Setting volcano=true in profile "addons-335640"
	I1011 20:59:31.689411   19546 addons.go:69] Setting metrics-server=true in profile "addons-335640"
	I1011 20:59:31.689419   19546 addons.go:234] Setting addon volcano=true in "addons-335640"
	I1011 20:59:31.689421   19546 addons.go:234] Setting addon metrics-server=true in "addons-335640"
	I1011 20:59:31.689428   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689424   19546 addons.go:69] Setting gcp-auth=true in profile "addons-335640"
	I1011 20:59:31.689447   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689447   19546 addons.go:69] Setting ingress-dns=true in profile "addons-335640"
	I1011 20:59:31.689458   19546 addons.go:234] Setting addon ingress-dns=true in "addons-335640"
	I1011 20:59:31.689461   19546 mustload.go:65] Loading cluster: addons-335640
	I1011 20:59:31.689467   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689496   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689656   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:31.689348   19546 addons.go:234] Setting addon yakd=true in "addons-335640"
	I1011 20:59:31.689770   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689848   19546 addons.go:69] Setting volumesnapshots=true in profile "addons-335640"
	I1011 20:59:31.689857   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689861   19546 addons.go:234] Setting addon volumesnapshots=true in "addons-335640"
	I1011 20:59:31.689863   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689878   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689882   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689895   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689926   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689947   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689946   19546 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-335640"
	I1011 20:59:31.689959   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689979   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689981   19546 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-335640"
	I1011 20:59:31.689983   19546 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-335640"
	I1011 20:59:31.689994   19546 addons.go:69] Setting registry=true in profile "addons-335640"
	I1011 20:59:31.689997   19546 addons.go:69] Setting default-storageclass=true in profile "addons-335640"
	I1011 20:59:31.690005   19546 addons.go:234] Setting addon registry=true in "addons-335640"
	I1011 20:59:31.690010   19546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-335640"
	I1011 20:59:31.690023   19546 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-335640"
	I1011 20:59:31.689430   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690105   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690138   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690172   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690205   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690240   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689437   19546 addons.go:69] Setting ingress=true in profile "addons-335640"
	I1011 20:59:31.690379   19546 addons.go:234] Setting addon ingress=true in "addons-335640"
	I1011 20:59:31.689385   19546 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-335640"
	I1011 20:59:31.689402   19546 addons.go:234] Setting addon storage-provisioner=true in "addons-335640"
	I1011 20:59:31.690494   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690519   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690581   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690594   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690630   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690597   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690665   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690676   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690679   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690703   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690802   19546 out.go:177] * Verifying Kubernetes components...
	I1011 20:59:31.690916   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690919   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690981   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691008   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691020   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691036   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691108   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.691202   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691228   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691316   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691360   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.695773   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:31.707626   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I1011 20:59:31.708159   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.709285   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.709309   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.709710   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.710707   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I1011 20:59:31.710714   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I1011 20:59:31.711164   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.711208   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.711525   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.711562   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.711169   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.714054   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.714109   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
	I1011 20:59:31.715039   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715160   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715256   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715532   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715546   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.715680   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715693   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.715821   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715847   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.716191   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716252   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716260   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716792   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.716826   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.725213   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I1011 20:59:31.725617   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.726114   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.726135   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.726492   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.726685   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.728487   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.728887   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.728933   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.739050   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.739102   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.739058   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.739200   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.746492   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I1011 20:59:31.747007   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.747846   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.747889   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.747975   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I1011 20:59:31.748214   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.748750   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.748790   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.749002   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I1011 20:59:31.753536   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.754114   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.754140   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.754511   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.755117   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.755150   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.755388   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.755705   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I1011 20:59:31.756011   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.756026   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.756094   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.756601   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.756616   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.757004   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.757058   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I1011 20:59:31.757312   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.757430   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.757534   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.758077   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.758122   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.758376   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1011 20:59:31.758469   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.758485   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.758847   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.758920   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.759432   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.759476   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.759707   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.759752   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.759775   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.760178   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.760795   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.760827   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.782590   19546 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1011 20:59:31.783290   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1011 20:59:31.783318   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I1011 20:59:31.783455   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1011 20:59:31.783522   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I1011 20:59:31.783576   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I1011 20:59:31.783622   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1011 20:59:31.783708   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I1011 20:59:31.783758   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1011 20:59:31.783897   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.783979   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784136   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784345   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1011 20:59:31.784360   19546 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1011 20:59:31.784382   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.784753   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784914   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.784926   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.784982   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.785066   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785078   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785091   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.785553   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785572   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785628   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.785662   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.785779   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785789   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785823   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785833   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.785838   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.786083   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.786100   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.786151   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.786188   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786640   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786644   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786677   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786845   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.786869   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.787291   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.787324   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.788152   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.788828   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.788844   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.788903   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.789606   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.789636   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.789931   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.790519   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.790554   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.791238   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.791584   19546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 20:59:31.791774   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1011 20:59:31.791889   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.791907   19546 addons.go:234] Setting addon default-storageclass=true in "addons-335640"
	I1011 20:59:31.791910   19546 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-335640"
	I1011 20:59:31.791919   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.791936   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I1011 20:59:31.791943   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.791954   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.792096   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.792284   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.792313   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.792445   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.792483   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.792488   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.792699   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.792816   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.792992   19546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:59:31.793004   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 20:59:31.793017   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.793076   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1011 20:59:31.793084   19546 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1011 20:59:31.793098   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.793166   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I1011 20:59:31.793451   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.793747   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.793852   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.794316   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794331   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.794826   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794850   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.794985   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794994   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.795350   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.795572   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.795616   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.795805   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.796306   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.796753   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.796773   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.796899   19546 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1011 20:59:31.797045   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797074   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.797571   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.797591   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797778   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.797902   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797982   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.798179   19546 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1011 20:59:31.798184   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.798195   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1011 20:59:31.798215   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.798320   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.799430   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.799932   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.800386   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.800406   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.800414   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.800993   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.801092   19546 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1011 20:59:31.801214   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.801336   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.802073   19546 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:59:31.802094   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1011 20:59:31.802111   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.802184   19546 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1011 20:59:31.802435   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1011 20:59:31.802818   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.802865   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.803269   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.803290   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.803355   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.803379   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.803457   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.803607   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.803623   19546 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1011 20:59:31.803641   19546 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1011 20:59:31.803664   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.803717   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.803821   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.804422   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.804587   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.806065   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.806272   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:31.806294   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:31.808245   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.808304   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.808327   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.808343   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.808363   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:31.808383   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:31.808390   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:31.808397   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:31.808403   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:31.808637   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.808779   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.808926   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.809173   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.809434   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.809452   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.809494   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:31.809590   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:31.809604   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	W1011 20:59:31.809674   19546 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1011 20:59:31.809721   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.809864   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.809993   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.810133   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.810381   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44055
	I1011 20:59:31.810961   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.811518   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.811533   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.811934   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.812098   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.812316   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I1011 20:59:31.812648   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.812913   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I1011 20:59:31.813120   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.813131   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.813762   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.814359   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.814386   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.815087   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1011 20:59:31.815183   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.815322   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.815690   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.815762   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.815776   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.816051   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.816199   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.817487   19546 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1011 20:59:31.817531   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.817558   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.817894   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.818315   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.818522   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 20:59:31.818540   19546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 20:59:31.818559   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.820953   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.821679   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.822067   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.822097   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.822225   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.822357   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.822514   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.822649   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.822829   19546 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1011 20:59:31.823300   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I1011 20:59:31.823749   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.824167   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.824185   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.824255   19546 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:59:31.824271   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1011 20:59:31.824288   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.824493   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.824654   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.826512   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.827401   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.827865   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.827955   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.827991   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1011 20:59:31.828086   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.828228   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.828385   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.828499   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.829915   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I1011 20:59:31.830209   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.830298   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I1011 20:59:31.830744   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.830760   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.831113   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.831285   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.831721   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.831736   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.832081   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.832159   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.832191   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.832223   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.832337   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1011 20:59:31.833536   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1011 20:59:31.834017   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.835249   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1011 20:59:31.835270   19546 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1011 20:59:31.836323   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1011 20:59:31.836352   19546 out.go:177]   - Using image docker.io/registry:2.8.3
	I1011 20:59:31.837436   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1011 20:59:31.837487   19546 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1011 20:59:31.837499   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1011 20:59:31.837520   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.837781   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41225
	I1011 20:59:31.838145   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I1011 20:59:31.838453   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.838879   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.838892   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.839169   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.839591   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.839619   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.839756   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1011 20:59:31.840524   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.840899   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.840941   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.841109   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.841301   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.841359   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.841502   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.841604   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.841714   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1011 20:59:31.841925   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.841941   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.842281   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.842448   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.842661   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1011 20:59:31.842674   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1011 20:59:31.842686   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.843978   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.844118   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I1011 20:59:31.844544   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.844899   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.844916   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.845250   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.845404   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.845513   19546 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1011 20:59:31.846148   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.846490   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.846511   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.846793   19546 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:59:31.846800   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.846807   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1011 20:59:31.846819   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.846796   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.846984   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.847147   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.847259   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.848940   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:31.849299   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.849684   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.849716   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.849829   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.850005   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.850147   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.850265   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.850971   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:31.851917   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1011 20:59:31.853051   19546 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:59:31.853067   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1011 20:59:31.853082   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.855625   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.855980   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.856009   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.856172   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.856352   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.856497   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.856634   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.857840   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1011 20:59:31.858221   19546 main.go:141] libmachine: () Calling .GetVersion
	W1011 20:59:31.858239   19546 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33242->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.858264   19546 retry.go:31] will retry after 195.444762ms: ssh: handshake failed: read tcp 192.168.39.1:33242->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.858607   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.858638   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.859019   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.859230   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.861010   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.862893   19546 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1011 20:59:31.864383   19546 out.go:177]   - Using image docker.io/busybox:stable
	I1011 20:59:31.865603   19546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:59:31.865624   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1011 20:59:31.865640   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.868493   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.868956   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.868974   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.869139   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.869310   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.869457   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.869576   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	W1011 20:59:31.870144   19546 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33248->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.870172   19546 retry.go:31] will retry after 193.429446ms: ssh: handshake failed: read tcp 192.168.39.1:33248->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.870725   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1011 20:59:31.871018   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.871445   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.871466   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.871755   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.871937   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.873248   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.873987   19546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 20:59:31.874003   19546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 20:59:31.874019   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.876472   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.876810   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.876832   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.876972   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.877138   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.877278   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.877391   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:32.196403   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1011 20:59:32.196428   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1011 20:59:32.199602   19546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:59:32.199692   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 20:59:32.230810   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1011 20:59:32.230834   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1011 20:59:32.258327   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1011 20:59:32.258347   19546 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1011 20:59:32.277837   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1011 20:59:32.277855   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1011 20:59:32.287783   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1011 20:59:32.305893   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 20:59:32.305912   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1011 20:59:32.308569   19546 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1011 20:59:32.308587   19546 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1011 20:59:32.327745   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:59:32.333696   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:59:32.357074   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:59:32.364167   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 20:59:32.378495   19546 node_ready.go:35] waiting up to 6m0s for node "addons-335640" to be "Ready" ...
	I1011 20:59:32.387230   19546 node_ready.go:49] node "addons-335640" has status "Ready":"True"
	I1011 20:59:32.387249   19546 node_ready.go:38] duration metric: took 8.732384ms for node "addons-335640" to be "Ready" ...
	I1011 20:59:32.387259   19546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:32.390062   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:59:32.402642   19546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:32.412976   19546 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:59:32.412995   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1011 20:59:32.423735   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1011 20:59:32.423753   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1011 20:59:32.480896   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:59:32.510218   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1011 20:59:32.510236   19546 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1011 20:59:32.535556   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1011 20:59:32.535581   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1011 20:59:32.579994   19546 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:59:32.580020   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1011 20:59:32.671703   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 20:59:32.671723   19546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 20:59:32.673061   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1011 20:59:32.673075   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1011 20:59:32.675649   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1011 20:59:32.675663   19546 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1011 20:59:32.693304   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:59:32.749313   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1011 20:59:32.749344   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1011 20:59:32.784844   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:59:32.842750   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:59:32.842777   19546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 20:59:32.859259   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:32.859280   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1011 20:59:32.882999   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:59:32.924624   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1011 20:59:32.924643   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1011 20:59:32.959502   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1011 20:59:32.959522   19546 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1011 20:59:33.048908   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:59:33.065364   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:33.178985   19546 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:33.179010   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1011 20:59:33.181022   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1011 20:59:33.181040   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1011 20:59:33.490032   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1011 20:59:33.490057   19546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1011 20:59:33.532631   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:33.726842   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1011 20:59:33.726863   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1011 20:59:34.043299   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1011 20:59:34.043318   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1011 20:59:34.369693   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:34.369715   19546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1011 20:59:34.435832   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:34.736855   19546 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.537125987s)
	I1011 20:59:34.736881   19546 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1011 20:59:34.830333   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:34.932968   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.60519175s)
	I1011 20:59:34.933016   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933026   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933051   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.645237718s)
	I1011 20:59:34.933086   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933104   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933301   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933311   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933320   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933323   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933329   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933336   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933342   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933349   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933534   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933586   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933598   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:34.933630   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:34.933641   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933651   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:35.241587   19546 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335640" context rescaled to 1 replicas
	I1011 20:59:36.514579   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:36.629937   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.296211247s)
	I1011 20:59:36.629995   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.629997   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.272895818s)
	I1011 20:59:36.630014   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630031   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630038   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.265849132s)
	I1011 20:59:36.630049   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630057   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630070   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630116   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.240029822s)
	I1011 20:59:36.630148   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630159   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630378   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630391   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630425   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630430   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630434   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630445   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630430   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630449   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630456   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630468   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630477   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630477   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630485   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630410   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630492   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630499   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630508   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630490   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630410   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630514   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630860   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630884   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630883   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630892   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630915   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630922   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630930   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630937   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.631958   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.631986   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.631993   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.753902   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.753925   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.754146   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.754162   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:38.832790   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1011 20:59:38.832835   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:38.836274   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:38.836748   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:38.836778   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:38.836984   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:38.837188   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:38.837357   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:38.837513   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:38.978581   19546 pod_ready.go:93] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:38.978606   19546 pod_ready.go:82] duration metric: took 6.575937129s for pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:38.978628   19546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:39.156654   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1011 20:59:39.285104   19546 addons.go:234] Setting addon gcp-auth=true in "addons-335640"
	I1011 20:59:39.285156   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:39.285490   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:39.285519   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:39.300693   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I1011 20:59:39.301272   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:39.301793   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:39.301816   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:39.302143   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:39.302628   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:39.302656   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:39.316876   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I1011 20:59:39.317228   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:39.317653   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:39.317676   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:39.317988   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:39.318191   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:39.319625   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:39.319818   19546 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1011 20:59:39.319838   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:39.322281   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:39.322662   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:39.322687   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:39.322795   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:39.322946   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:39.323083   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:39.323187   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:40.265296   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.571961251s)
	I1011 20:59:40.265348   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265349   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.784421586s)
	I1011 20:59:40.265372   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265381   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265399   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265382   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.480507112s)
	I1011 20:59:40.265440   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.382423621s)
	I1011 20:59:40.265472   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265482   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265451   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265514   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265518   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.216575985s)
	I1011 20:59:40.265537   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265548   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265667   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.200269042s)
	I1011 20:59:40.265688   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265697   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265843   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.733178263s)
	W1011 20:59:40.265869   19546 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:40.265899   19546 retry.go:31] will retry after 274.780509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:40.265976   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.265991   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266004   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266028   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266034   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266041   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266047   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266095   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266100   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266107   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266115   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266165   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266172   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266180   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266185   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266413   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266434   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266455   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266458   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266465   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266466   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266474   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266482   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266474   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266520   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266711   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266727   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266791   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266798   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266806   19546 addons.go:475] Verifying addon registry=true in "addons-335640"
	I1011 20:59:40.266996   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267004   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267699   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267710   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267718   19546 addons.go:475] Verifying addon metrics-server=true in "addons-335640"
	I1011 20:59:40.267802   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.267834   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.267836   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267849   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267854   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267856   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267863   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267865   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267871   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.267874   19546 addons.go:475] Verifying addon ingress=true in "addons-335640"
	I1011 20:59:40.267878   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.268151   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.268170   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.268680   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.268915   19546 out.go:177] * Verifying registry addon...
	I1011 20:59:40.269991   19546 out.go:177] * Verifying ingress addon...
	I1011 20:59:40.270823   19546 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335640 service yakd-dashboard -n yakd-dashboard
	
	I1011 20:59:40.272729   19546 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1011 20:59:40.273933   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1011 20:59:40.309895   19546 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:40.309918   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.310256   19546 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1011 20:59:40.310277   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.342406   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.342427   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.342701   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.342768   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.541814   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:40.785995   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.787614   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.063310   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:41.280842   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.281430   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.591469   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.761086239s)
	I1011 20:59:41.591505   19546 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.271665694s)
	I1011 20:59:41.591521   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:41.591539   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:41.591809   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:41.591834   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:41.591840   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:41.591847   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:41.591856   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:41.592126   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:41.592197   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:41.592215   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:41.592224   19546 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-335640"
	I1011 20:59:41.593332   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:41.594222   19546 out.go:177] * Verifying csi-hostpath-driver addon...
	I1011 20:59:41.595990   19546 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1011 20:59:41.596706   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1011 20:59:41.597298   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1011 20:59:41.597317   19546 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1011 20:59:41.627782   19546 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:41.627802   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.665781   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1011 20:59:41.665806   19546 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1011 20:59:41.739758   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:41.739783   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1011 20:59:41.782477   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.782709   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.826006   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:42.101157   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.282768   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.283945   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.448155   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.906288354s)
	I1011 20:59:42.448208   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:42.448222   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:42.448490   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:42.448539   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:42.448564   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:42.448580   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:42.448588   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:42.448808   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:42.448833   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:42.448843   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:42.602424   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.779678   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.780288   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.123368   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.170907   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.344861361s)
	I1011 20:59:43.170953   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:43.170967   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:43.171242   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:43.171255   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:43.171269   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:43.171278   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:43.171311   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:43.171543   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:43.171562   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:43.171564   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:43.173567   19546 addons.go:475] Verifying addon gcp-auth=true in "addons-335640"
	I1011 20:59:43.175083   19546 out.go:177] * Verifying gcp-auth addon...
	I1011 20:59:43.177037   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1011 20:59:43.250440   19546 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 20:59:43.250462   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.349476   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.349614   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.484792   19546 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f7488" not found
	I1011 20:59:43.484816   19546 pod_ready.go:82] duration metric: took 4.506180283s for pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace to be "Ready" ...
	E1011 20:59:43.484827   19546 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f7488" not found
	I1011 20:59:43.484834   19546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.497842   19546 pod_ready.go:93] pod "etcd-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.497875   19546 pod_ready.go:82] duration metric: took 13.030373ms for pod "etcd-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.497888   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.506654   19546 pod_ready.go:93] pod "kube-apiserver-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.506679   19546 pod_ready.go:82] duration metric: took 8.78243ms for pod "kube-apiserver-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.506690   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.518732   19546 pod_ready.go:93] pod "kube-controller-manager-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.518754   19546 pod_ready.go:82] duration metric: took 12.056359ms for pod "kube-controller-manager-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.518766   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjszr" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.526272   19546 pod_ready.go:93] pod "kube-proxy-pjszr" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.526294   19546 pod_ready.go:82] duration metric: took 7.516668ms for pod "kube-proxy-pjszr" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.526306   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.601885   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.682392   19546 pod_ready.go:93] pod "kube-scheduler-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.682412   19546 pod_ready.go:82] duration metric: took 156.091647ms for pod "kube-scheduler-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.682419   19546 pod_ready.go:39] duration metric: took 11.295148573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:43.682434   19546 api_server.go:52] waiting for apiserver process to appear ...
	I1011 20:59:43.682492   19546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 20:59:43.703565   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.719787   19546 api_server.go:72] duration metric: took 12.030598076s to wait for apiserver process to appear ...
	I1011 20:59:43.719813   19546 api_server.go:88] waiting for apiserver healthz status ...
	I1011 20:59:43.719835   19546 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1011 20:59:43.724897   19546 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I1011 20:59:43.726292   19546 api_server.go:141] control plane version: v1.31.1
	I1011 20:59:43.726314   19546 api_server.go:131] duration metric: took 6.493799ms to wait for apiserver health ...
	I1011 20:59:43.726322   19546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 20:59:43.778701   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.778701   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.889202   19546 system_pods.go:59] 18 kube-system pods found
	I1011 20:59:43.889229   19546 system_pods.go:61] "amd-gpu-device-plugin-9lfb2" [5e9f5699-a31f-43bd-9cc8-96ce96a3c580] Running
	I1011 20:59:43.889236   19546 system_pods.go:61] "coredns-7c65d6cfc9-c8225" [8bfebaba-1d36-43d9-81be-28300ec9e5f1] Running
	I1011 20:59:43.889242   19546 system_pods.go:61] "csi-hostpath-attacher-0" [935d8f6e-845b-4c20-b293-05d78c9d6470] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:43.889249   19546 system_pods.go:61] "csi-hostpath-resizer-0" [8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:43.889259   19546 system_pods.go:61] "csi-hostpathplugin-5bbrd" [15c420a2-bc23-4178-a7e7-424c14f1cdee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:43.889263   19546 system_pods.go:61] "etcd-addons-335640" [a5f2fe46-b853-4d7d-b18c-877e9328560c] Running
	I1011 20:59:43.889267   19546 system_pods.go:61] "kube-apiserver-addons-335640" [a1d31822-8e1b-4983-9b71-678270e37220] Running
	I1011 20:59:43.889271   19546 system_pods.go:61] "kube-controller-manager-addons-335640" [871e3fb0-541c-49fb-b7cc-b52516a8ccb2] Running
	I1011 20:59:43.889276   19546 system_pods.go:61] "kube-ingress-dns-minikube" [04cb67bc-78e9-4d22-8172-f3d24200627e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1011 20:59:43.889280   19546 system_pods.go:61] "kube-proxy-pjszr" [e3663ee2-aeb3-4c62-a737-e095cc1897aa] Running
	I1011 20:59:43.889284   19546 system_pods.go:61] "kube-scheduler-addons-335640" [d102a4da-3781-4045-b2a0-0984be417b76] Running
	I1011 20:59:43.889289   19546 system_pods.go:61] "metrics-server-84c5f94fbc-zmj4b" [8ec1bee3-86d5-4b1b-ba8e-96e9786005cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:43.889298   19546 system_pods.go:61] "nvidia-device-plugin-daemonset-4rwwd" [fdff7711-2b34-4674-b560-4769911e0b24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1011 20:59:43.889306   19546 system_pods.go:61] "registry-66c9cd494c-fscdh" [b7eae652-7687-4daf-bcb5-ba3501d88f5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:43.889312   19546 system_pods.go:61] "registry-proxy-9bpbj" [ce628b0d-73e1-4fa3-a071-c9091c1ae2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:43.889317   19546 system_pods.go:61] "snapshot-controller-56fcc65765-tx42p" [9011781b-8f93-423b-bd92-d3df096f9a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:43.889324   19546 system_pods.go:61] "snapshot-controller-56fcc65765-wtz96" [ecfec52c-a8f2-454d-8a60-688497d37e44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:43.889329   19546 system_pods.go:61] "storage-provisioner" [e3064aeb-986a-48a2-9387-5a63fa2360bb] Running
	I1011 20:59:43.889335   19546 system_pods.go:74] duration metric: took 163.008614ms to wait for pod list to return data ...
	I1011 20:59:43.889342   19546 default_sa.go:34] waiting for default service account to be created ...
	I1011 20:59:44.082350   19546 default_sa.go:45] found service account: "default"
	I1011 20:59:44.082376   19546 default_sa.go:55] duration metric: took 193.025411ms for default service account to be created ...
	I1011 20:59:44.082386   19546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 20:59:44.101504   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.180781   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.277560   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.279316   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.286356   19546 system_pods.go:86] 18 kube-system pods found
	I1011 20:59:44.286376   19546 system_pods.go:89] "amd-gpu-device-plugin-9lfb2" [5e9f5699-a31f-43bd-9cc8-96ce96a3c580] Running
	I1011 20:59:44.286381   19546 system_pods.go:89] "coredns-7c65d6cfc9-c8225" [8bfebaba-1d36-43d9-81be-28300ec9e5f1] Running
	I1011 20:59:44.286388   19546 system_pods.go:89] "csi-hostpath-attacher-0" [935d8f6e-845b-4c20-b293-05d78c9d6470] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:44.286394   19546 system_pods.go:89] "csi-hostpath-resizer-0" [8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:44.286402   19546 system_pods.go:89] "csi-hostpathplugin-5bbrd" [15c420a2-bc23-4178-a7e7-424c14f1cdee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:44.286409   19546 system_pods.go:89] "etcd-addons-335640" [a5f2fe46-b853-4d7d-b18c-877e9328560c] Running
	I1011 20:59:44.286414   19546 system_pods.go:89] "kube-apiserver-addons-335640" [a1d31822-8e1b-4983-9b71-678270e37220] Running
	I1011 20:59:44.286417   19546 system_pods.go:89] "kube-controller-manager-addons-335640" [871e3fb0-541c-49fb-b7cc-b52516a8ccb2] Running
	I1011 20:59:44.286427   19546 system_pods.go:89] "kube-ingress-dns-minikube" [04cb67bc-78e9-4d22-8172-f3d24200627e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1011 20:59:44.286431   19546 system_pods.go:89] "kube-proxy-pjszr" [e3663ee2-aeb3-4c62-a737-e095cc1897aa] Running
	I1011 20:59:44.286437   19546 system_pods.go:89] "kube-scheduler-addons-335640" [d102a4da-3781-4045-b2a0-0984be417b76] Running
	I1011 20:59:44.286442   19546 system_pods.go:89] "metrics-server-84c5f94fbc-zmj4b" [8ec1bee3-86d5-4b1b-ba8e-96e9786005cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:44.286449   19546 system_pods.go:89] "nvidia-device-plugin-daemonset-4rwwd" [fdff7711-2b34-4674-b560-4769911e0b24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1011 20:59:44.286454   19546 system_pods.go:89] "registry-66c9cd494c-fscdh" [b7eae652-7687-4daf-bcb5-ba3501d88f5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:44.286463   19546 system_pods.go:89] "registry-proxy-9bpbj" [ce628b0d-73e1-4fa3-a071-c9091c1ae2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:44.286470   19546 system_pods.go:89] "snapshot-controller-56fcc65765-tx42p" [9011781b-8f93-423b-bd92-d3df096f9a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:44.286476   19546 system_pods.go:89] "snapshot-controller-56fcc65765-wtz96" [ecfec52c-a8f2-454d-8a60-688497d37e44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:44.286481   19546 system_pods.go:89] "storage-provisioner" [e3064aeb-986a-48a2-9387-5a63fa2360bb] Running
	I1011 20:59:44.286488   19546 system_pods.go:126] duration metric: took 204.096425ms to wait for k8s-apps to be running ...
	I1011 20:59:44.286496   19546 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 20:59:44.286535   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 20:59:44.302602   19546 system_svc.go:56] duration metric: took 16.102828ms WaitForService to wait for kubelet
	I1011 20:59:44.302630   19546 kubeadm.go:582] duration metric: took 12.613443676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:59:44.302649   19546 node_conditions.go:102] verifying NodePressure condition ...
	I1011 20:59:44.482586   19546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 20:59:44.482629   19546 node_conditions.go:123] node cpu capacity is 2
	I1011 20:59:44.482646   19546 node_conditions.go:105] duration metric: took 179.989874ms to run NodePressure ...
	I1011 20:59:44.482657   19546 start.go:241] waiting for startup goroutines ...
	I1011 20:59:44.602468   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.680649   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.777026   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.778223   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.102303   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.180492   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.278526   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.278848   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.601042   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.681299   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.777535   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.778416   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.101316   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.180469   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.278668   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.278692   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.601216   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.680857   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.778350   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.778796   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.101061   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.180142   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.278280   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.278404   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.600990   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.679821   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.777408   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.778196   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.323238   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.323343   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.323438   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.323495   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.602022   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.681614   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.779220   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.779346   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.100983   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.179899   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.279084   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.279588   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.601768   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.680781   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.777833   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.778405   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.101629   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.181276   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.279072   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.279203   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.600831   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.681140   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.778528   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.779250   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.101266   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.200850   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.302405   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.302591   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.600886   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.681458   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.777902   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.778603   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.101695   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.180298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.277415   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.280852   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.601565   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.680858   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.777815   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.778921   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.100913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.180999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.277282   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.278290   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.601023   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.680370   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.778935   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.778978   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.101275   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.180646   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.277835   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.278481   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.601313   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.680854   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.778097   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.778506   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.101360   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.180922   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.278290   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.278948   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.602076   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.680758   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.777246   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.778654   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.101967   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.318584   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.319561   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.320410   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.601022   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.680289   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.777706   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.778548   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.101131   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.180836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.277731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.278035   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.602112   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.680373   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.778295   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.778571   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.101811   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.181346   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.278293   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.278469   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.601366   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.680961   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.778586   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.779010   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.102082   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.230880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.279002   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.279352   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.604348   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.703011   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.778106   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.778145   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.102259   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.180423   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.277713   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.278153   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.601855   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.681062   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.777450   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.777727   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.102357   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.180322   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.278702   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.280800   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.602072   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.680796   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.777530   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.779283   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.102354   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.180563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.278869   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.279212   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.602870   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.681704   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.778284   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.778396   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.102957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.181010   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.279187   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.279367   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.602880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.681769   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.778408   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.779145   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.102863   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.181412   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.278836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.278972   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.602203   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.681414   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.778235   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.778785   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.101772   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.180966   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.278318   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.278571   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.602286   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.681731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.778557   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.779210   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.241356   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.241661   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.279024   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.279246   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.600945   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.680752   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.779105   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.779353   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.101900   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.180244   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.279695   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.279739   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.603285   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.680374   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.779212   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.780280   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.104401   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.181284   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.549622   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.550106   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.601472   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.681267   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.779965   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.780761   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.102810   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.182004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.278260   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.278949   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.601887   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.681292   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.780282   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.780426   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.102328   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.183069   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.284196   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.284968   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.602275   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.681159   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.777949   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.778114   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.101731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.181926   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.284165   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.284394   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.601352   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.680786   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.778470   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.778488   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.101929   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.206812   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.278896   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.279080   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.603298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.680466   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.779513   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.779890   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.102788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.180633   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.278051   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.278194   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.602645   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.702479   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.778950   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.779518   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.102577   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.180727   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.280096   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.286176   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.606858   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.681029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.777984   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.778431   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.102385   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.180465   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.666215   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.666988   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.769542   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.770598   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.870962   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.874558   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.105711   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.180938   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.278839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.279399   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.601906   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.681267   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.777529   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.778250   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.102572   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.181090   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.277851   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.278126   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.607506   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.681236   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.777810   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.778090   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.101913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.181295   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.278536   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.278853   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.602471   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.681058   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.779507   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.779662   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.102086   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.181480   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.279376   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.279519   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.601942   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.680808   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.779044   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.779359   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.102046   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.181063   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.279098   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.279207   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.602041   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.681986   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.779796   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.779894   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.101411   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.180963   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.278891   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.279160   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.637418   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.681069   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.778746   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.779126   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.102279   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.180649   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.278528   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.278850   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.601289   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.681029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.777590   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.778589   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.102198   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.182123   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.277603   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.278052   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.602581   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.680823   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.777967   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.778014   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.100884   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.181632   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.277763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.278004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.601167   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.680633   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.779080   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.779439   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.102093   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.180706   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.278009   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.279127   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.600933   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.680478   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.779536   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.779689   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.102221   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.181563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.277859   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.279104   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.601390   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.939567   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.939719   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.939888   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.101785   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.181158   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.278246   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.279624   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.601685   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.680817   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.777683   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.778574   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.102337   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.180519   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.280021   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.280344   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.601489   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.680750   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.778301   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.778651   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.101771   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.180935   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.277836   19546 kapi.go:107] duration metric: took 49.00389833s to wait for kubernetes.io/minikube-addons=registry ...
	I1011 21:00:29.278069   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.601770   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.681011   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.778457   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.102338   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.179988   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.278857   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.602716   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.680968   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.781397   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.102325   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.182038   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.278187   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.601203   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.680393   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.777601   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.101237   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.181237   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.277926   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.601958   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.679826   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.777279   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.102510   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.180637   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.277580   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.601103   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.680583   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.778661   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.101835   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.180767   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.277058   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.602168   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.680763   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.777242   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.108549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.186928   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.277602   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.601674   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.681855   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.778295   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.101953   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.179908   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.277472   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.601038   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.680412   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.777479   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.102180   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.183660   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.278041   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.604315   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.680818   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.777763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.107029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.296155   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.296756   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.620957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.709691   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.777660   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.103640   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.180441   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.277775   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.601665   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.681143   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.790189   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:40.112788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.181407   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.278834   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:40.602247   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.680496   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.779851   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:41.102691   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.181181   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.278109   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:41.602250   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.680549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.778251   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:42.104023   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.205549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.305335   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:42.602290   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.680980   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.777679   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:43.102345   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.181129   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.279731   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:43.602542   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.680160   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.777834   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:44.102075   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.202231   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.277736   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:44.601913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.680089   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.779807   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:45.102517   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.181144   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.277753   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:45.607839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.682298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.784365   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:46.102431   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.186415   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.278401   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:46.601951   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.680766   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.777869   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:47.101304   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.180618   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.278879   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:47.602338   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.680269   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.780804   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:48.102150   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.201526   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.304379   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:48.602791   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.681712   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.777454   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:49.101148   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.180865   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.277688   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:49.602182   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.680806   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.777753   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:50.102305   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.179878   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.278477   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:50.601113   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.680458   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.777924   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:51.102696   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.180998   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.277301   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:51.601832   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.680830   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.778040   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:52.405620   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.406580   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:52.406848   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:52.601793   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.680901   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:52.778061   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:53.101539   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.180590   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:53.277189   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:53.600896   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.679950   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:53.777613   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:54.101586   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:54.180332   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:54.278181   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:54.603334   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:54.680926   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:54.778008   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:55.109583   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:55.208417   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:55.278126   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:55.601331   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:55.680880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:55.777763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:56.101886   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:56.202116   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:56.305609   19546 kapi.go:107] duration metric: took 1m16.032872891s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1011 21:00:56.601786   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:56.681914   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:57.102006   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:57.201713   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:57.601778   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:57.682313   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:58.102130   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:58.183764   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:58.601716   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:58.701625   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:59.103815   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:59.202903   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:59.600776   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:59.681705   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:00.101731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:00.181410   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:00.601818   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:00.681311   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:01.101917   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:01.181449   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:01.603801   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:01.683494   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:02.101681   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:02.181376   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:02.602661   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:02.681056   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:03.101879   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:03.181033   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:03.602332   19546 kapi.go:107] duration metric: took 1m22.005619714s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1011 21:01:03.679976   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:04.180635   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:04.681299   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:05.181569   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:05.680409   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:06.181200   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:06.680948   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:07.181127   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:07.680339   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:08.181258   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:08.680825   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:09.183465   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:09.680543   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:10.181680   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:10.680863   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:11.181462   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:11.681747   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:12.181535   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:12.681613   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:13.181334   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:13.681397   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:14.180902   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:14.681836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:15.181405   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:15.681032   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:16.180958   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:16.681574   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:17.181417   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:17.681553   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:18.181725   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:18.681386   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:19.180756   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:19.681496   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:20.181934   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:20.681209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:21.181559   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:21.681309   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:22.181052   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:22.680474   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:23.181273   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:23.680992   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:24.180560   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:24.680783   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:25.181133   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:25.681147   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:26.180930   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:26.680630   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:27.181283   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:27.681073   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:28.181121   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:28.680584   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:29.182421   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:29.681279   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:30.181999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:30.681719   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:31.181004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:31.680915   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:32.180724   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:32.681941   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:33.181216   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:33.683315   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:34.181671   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:34.682225   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:35.182292   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:35.681933   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:36.181627   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:36.681210   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:37.180588   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:37.681961   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:38.181545   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:38.681209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:39.181035   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:39.681389   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:40.182694   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:40.680637   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:41.181450   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:41.681020   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:42.180481   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:42.681377   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:43.181015   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:43.681413   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:44.181094   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:44.680957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:45.181007   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:45.681065   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:46.180839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:46.680051   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:47.180853   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:47.680314   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:48.181385   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:48.681076   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:49.181120   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:49.680937   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:50.181052   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:50.680808   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:51.181202   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:51.681473   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:52.182374   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:52.681078   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:53.181046   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:53.681157   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:54.180998   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:54.680495   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:55.181234   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:55.682036   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:56.181563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:56.681035   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:57.181712   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:57.681192   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:58.181240   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:58.680409   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:59.180912   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:59.681325   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:00.181341   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:00.681641   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:01.181161   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:01.680999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:02.180249   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:02.680631   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:03.181370   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:03.681001   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:04.180788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:04.682209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:05.181190   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:05.682287   19546 kapi.go:107] duration metric: took 2m22.50524239s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1011 21:02:05.683755   19546 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-335640 cluster.
	I1011 21:02:05.685316   19546 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1011 21:02:05.686666   19546 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1011 21:02:05.688138   19546 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1011 21:02:05.689465   19546 addons.go:510] duration metric: took 2m34.000257135s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin default-storageclass metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1011 21:02:05.689505   19546 start.go:246] waiting for cluster config update ...
	I1011 21:02:05.689525   19546 start.go:255] writing updated cluster config ...
	I1011 21:02:05.689856   19546 ssh_runner.go:195] Run: rm -f paused
	I1011 21:02:05.744262   19546 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:02:05.746153   19546 out.go:177] * Done! kubectl is now configured to use "addons-335640" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.758780058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680729758749707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ecd7454-1f21-4fca-89ef-f1997e203c67 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.759537327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f94fb5bf-7318-408e-ad7a-47f47e9f5015 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.759593469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f94fb5bf-7318-408e-ad7a-47f47e9f5015 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.759917744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-bedc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd378be13b2e0ce5c98ba37bf1d8faef8a035f7e5386617d9ac07a7d4bac315,PodSandboxId:56a9402d35fbadf2111ad2f4632f8664d20c68f8659de0b5afa8d274bd071987,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728680454992477618,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-thztz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: adf54042-57e8-424c-a853-34729662ac6b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c6f7e1f29b454406254620ce37eec63118794d234db3bcca5f0fb3bea6e5269,PodSandboxId:1b3d3d93d1062b8ac5b0da40d1ded8f4a90bf872ecd962edf0390637ff7ec791,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728680443152888453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hxs7q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c554b94-4174-4108-8c25-a93ff2ec57fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b93922f79482077efa07df526614a4871c342c6fceff72104f68137b611852,PodSandboxId:65f5bf5ad8b613b017127bc68388d34a5981fc0d432063ab9561f1144cfc6cc2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728680442181258460,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7d68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52078e75-79c6-4fb7-aa3f-c873441e6f8b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888f3773f064df870c937b1be6492427289523e54ecd3a9762480bad04b9606f,PodSandboxId:c3ea47089449e255822c9c8c6cf41583a996771ef1dadfa6c1aac217fc2e0ed1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728680391125722390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04cb67bc-78e9-4d22-8172-f3d24200627e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde03
04b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62
770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c6347
8d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988
caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b2
1c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=f94fb5bf-7318-408e-ad7a-47f47e9f5015 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.798559512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1959299b-db6c-4dd0-9872-66de586ef847 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.798632098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1959299b-db6c-4dd0-9872-66de586ef847 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.800038137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f01573ba-b2e2-4eec-870b-0f9f3c4a9fcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.801304662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680729801276600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f01573ba-b2e2-4eec-870b-0f9f3c4a9fcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.802211884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3eebd38-f6fb-4a05-b6db-9d964016fe7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.802343581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3eebd38-f6fb-4a05-b6db-9d964016fe7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.802707439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-bedc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd378be13b2e0ce5c98ba37bf1d8faef8a035f7e5386617d9ac07a7d4bac315,PodSandboxId:56a9402d35fbadf2111ad2f4632f8664d20c68f8659de0b5afa8d274bd071987,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728680454992477618,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-thztz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: adf54042-57e8-424c-a853-34729662ac6b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c6f7e1f29b454406254620ce37eec63118794d234db3bcca5f0fb3bea6e5269,PodSandboxId:1b3d3d93d1062b8ac5b0da40d1ded8f4a90bf872ecd962edf0390637ff7ec791,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728680443152888453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hxs7q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c554b94-4174-4108-8c25-a93ff2ec57fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b93922f79482077efa07df526614a4871c342c6fceff72104f68137b611852,PodSandboxId:65f5bf5ad8b613b017127bc68388d34a5981fc0d432063ab9561f1144cfc6cc2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728680442181258460,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7d68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52078e75-79c6-4fb7-aa3f-c873441e6f8b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888f3773f064df870c937b1be6492427289523e54ecd3a9762480bad04b9606f,PodSandboxId:c3ea47089449e255822c9c8c6cf41583a996771ef1dadfa6c1aac217fc2e0ed1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728680391125722390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04cb67bc-78e9-4d22-8172-f3d24200627e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde03
04b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62
770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c6347
8d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988
caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b2
1c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=c3eebd38-f6fb-4a05-b6db-9d964016fe7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.838773344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f134eed-9f28-49c4-b5a6-738ba9cf00d2 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.838945345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f134eed-9f28-49c4-b5a6-738ba9cf00d2 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.840448996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d018c871-c152-4f86-9a95-0749a4e99f7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.846512106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680729846478014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d018c871-c152-4f86-9a95-0749a4e99f7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.847488180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddd3a50f-8bbe-4ce3-8dc9-64bb9bc772c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.847612934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddd3a50f-8bbe-4ce3-8dc9-64bb9bc772c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.848008323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-bedc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd378be13b2e0ce5c98ba37bf1d8faef8a035f7e5386617d9ac07a7d4bac315,PodSandboxId:56a9402d35fbadf2111ad2f4632f8664d20c68f8659de0b5afa8d274bd071987,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728680454992477618,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-thztz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: adf54042-57e8-424c-a853-34729662ac6b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c6f7e1f29b454406254620ce37eec63118794d234db3bcca5f0fb3bea6e5269,PodSandboxId:1b3d3d93d1062b8ac5b0da40d1ded8f4a90bf872ecd962edf0390637ff7ec791,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728680443152888453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hxs7q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c554b94-4174-4108-8c25-a93ff2ec57fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b93922f79482077efa07df526614a4871c342c6fceff72104f68137b611852,PodSandboxId:65f5bf5ad8b613b017127bc68388d34a5981fc0d432063ab9561f1144cfc6cc2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728680442181258460,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7d68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52078e75-79c6-4fb7-aa3f-c873441e6f8b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888f3773f064df870c937b1be6492427289523e54ecd3a9762480bad04b9606f,PodSandboxId:c3ea47089449e255822c9c8c6cf41583a996771ef1dadfa6c1aac217fc2e0ed1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728680391125722390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04cb67bc-78e9-4d22-8172-f3d24200627e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde03
04b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62
770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c6347
8d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988
caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b2
1c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=ddd3a50f-8bbe-4ce3-8dc9-64bb9bc772c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.882214657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=befc3979-2c6b-4f3f-9ebf-76290eafa6d1 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.882295708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=befc3979-2c6b-4f3f-9ebf-76290eafa6d1 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.883561183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9688e024-e5eb-4117-ad17-d62c4e604542 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.884695292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680729884670167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9688e024-e5eb-4117-ad17-d62c4e604542 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.885397478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68cf5242-dddc-4888-8a80-62dd4b9583a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.885505590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68cf5242-dddc-4888-8a80-62dd4b9583a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:05:29 addons-335640 crio[665]: time="2024-10-11 21:05:29.885920684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-bedc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd378be13b2e0ce5c98ba37bf1d8faef8a035f7e5386617d9ac07a7d4bac315,PodSandboxId:56a9402d35fbadf2111ad2f4632f8664d20c68f8659de0b5afa8d274bd071987,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728680454992477618,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-thztz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: adf54042-57e8-424c-a853-34729662ac6b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c6f7e1f29b454406254620ce37eec63118794d234db3bcca5f0fb3bea6e5269,PodSandboxId:1b3d3d93d1062b8ac5b0da40d1ded8f4a90bf872ecd962edf0390637ff7ec791,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728680443152888453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hxs7q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c554b94-4174-4108-8c25-a93ff2ec57fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7b93922f79482077efa07df526614a4871c342c6fceff72104f68137b611852,PodSandboxId:65f5bf5ad8b613b017127bc68388d34a5981fc0d432063ab9561f1144cfc6cc2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728680442181258460,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7d68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52078e75-79c6-4fb7-aa3f-c873441e6f8b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888f3773f064df870c937b1be6492427289523e54ecd3a9762480bad04b9606f,PodSandboxId:c3ea47089449e255822c9c8c6cf41583a996771ef1dadfa6c1aac217fc2e0ed1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728680391125722390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04cb67bc-78e9-4d22-8172-f3d24200627e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde03
04b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62
770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c6347
8d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988
caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b2
1c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=68cf5242-dddc-4888-8a80-62dd4b9583a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bc36dea0fbfd1       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   fb931a82e05b9       nginx
	209691f7026aa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   a82d83d3733b2       busybox
	8dd378be13b2e       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   56a9402d35fba       ingress-nginx-controller-5f85ff4588-thztz
	6c6f7e1f29b45       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   1b3d3d93d1062       ingress-nginx-admission-patch-hxs7q
	a7b93922f7948       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   65f5bf5ad8b61       ingress-nginx-admission-create-c7d68
	77af1353666e7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        5 minutes ago       Running             metrics-server            0                   99742615d4255       metrics-server-84c5f94fbc-zmj4b
	888f3773f064d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   c3ea47089449e       kube-ingress-dns-minikube
	f2b8178d0fe50       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   4ee7875e4d7a1       amd-gpu-device-plugin-9lfb2
	f16d688ebd563       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   d70591db6b44a       storage-provisioner
	868280db0f1ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   ac050a37bd2a4       coredns-7c65d6cfc9-c8225
	06f0c4117abe8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             5 minutes ago       Running             kube-proxy                0                   a7702246bf4fd       kube-proxy-pjszr
	30fc88697faa0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             6 minutes ago       Running             kube-scheduler            0                   c3d3376cc1c0d       kube-scheduler-addons-335640
	42d6d5988caa0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             6 minutes ago       Running             kube-apiserver            0                   04f67c71acadc       kube-apiserver-addons-335640
	6954d994f9340       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             6 minutes ago       Running             etcd                      0                   c262f51d7b83e       etcd-addons-335640
	3baf00379ca6e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             6 minutes ago       Running             kube-controller-manager   0                   3d278f46c6072       kube-controller-manager-addons-335640
	
	
	==> coredns [868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a] <==
	[INFO] 10.244.0.8:56700 - 44489 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111339s
	[INFO] 10.244.0.8:56700 - 43684 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000119511s
	[INFO] 10.244.0.8:56700 - 27139 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000142511s
	[INFO] 10.244.0.8:56700 - 39911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000304769s
	[INFO] 10.244.0.8:56700 - 39666 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000174905s
	[INFO] 10.244.0.8:56700 - 40551 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000134719s
	[INFO] 10.244.0.8:56700 - 43375 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000244195s
	[INFO] 10.244.0.8:43230 - 29229 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085835s
	[INFO] 10.244.0.8:43230 - 28963 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037208s
	[INFO] 10.244.0.8:46378 - 44638 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043106s
	[INFO] 10.244.0.8:46378 - 44403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042383s
	[INFO] 10.244.0.8:36979 - 37556 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034401s
	[INFO] 10.244.0.8:36979 - 37333 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029705s
	[INFO] 10.244.0.8:32977 - 47202 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000034177s
	[INFO] 10.244.0.8:32977 - 47024 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036415s
	[INFO] 10.244.0.23:54166 - 64842 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000614619s
	[INFO] 10.244.0.23:43035 - 46497 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084967s
	[INFO] 10.244.0.23:39763 - 1843 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120125s
	[INFO] 10.244.0.23:34203 - 44741 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063203s
	[INFO] 10.244.0.23:57178 - 10689 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165915s
	[INFO] 10.244.0.23:32828 - 62709 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069049s
	[INFO] 10.244.0.23:34006 - 30441 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003548906s
	[INFO] 10.244.0.23:47099 - 38592 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00769292s
	[INFO] 10.244.0.27:53009 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000725543s
	[INFO] 10.244.0.27:57987 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145464s
	
	
	==> describe nodes <==
	Name:               addons-335640
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=addons-335640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T20_59_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335640
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 20:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335640
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:05:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:03:31 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:03:31 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:03:31 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:03:31 +0000   Fri, 11 Oct 2024 20:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    addons-335640
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2e1823e29c4a7d9067bd8673ac97f7
	  System UUID:                8a2e1823-e29c-4a7d-9067-bd8673ac97f7
	  Boot ID:                    6a7e73ca-006d-4953-9110-3bc1a1eac562
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     hello-world-app-55bf9c44b4-h7nfv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-thztz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m50s
	  kube-system                 amd-gpu-device-plugin-9lfb2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 coredns-7c65d6cfc9-c8225                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m58s
	  kube-system                 etcd-addons-335640                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m4s
	  kube-system                 kube-apiserver-addons-335640                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-addons-335640        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-pjszr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-addons-335640                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 metrics-server-84c5f94fbc-zmj4b              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m55s  kube-proxy       
	  Normal  Starting                 6m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m4s   kubelet          Node addons-335640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s   kubelet          Node addons-335640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s   kubelet          Node addons-335640 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m3s   kubelet          Node addons-335640 status is now: NodeReady
	  Normal  RegisteredNode           6m     node-controller  Node addons-335640 event: Registered Node addons-335640 in Controller
	
	
	==> dmesg <==
	[  +0.085158] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.311309] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.178382] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.009019] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.039036] kauditd_printk_skb: 138 callbacks suppressed
	[  +9.120730] kauditd_printk_skb: 77 callbacks suppressed
	[Oct11 21:00] kauditd_printk_skb: 2 callbacks suppressed
	[ +23.841651] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.010961] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.213329] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.111618] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.230247] kauditd_printk_skb: 16 callbacks suppressed
	[Oct11 21:02] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.010586] kauditd_printk_skb: 9 callbacks suppressed
	[ +16.312988] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.262184] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.323065] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.606848] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.550661] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.036065] kauditd_printk_skb: 32 callbacks suppressed
	[Oct11 21:03] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.016046] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.124508] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.066527] kauditd_printk_skb: 38 callbacks suppressed
	[Oct11 21:05] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09] <==
	{"level":"warn","ts":"2024-10-11T21:00:26.924182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.198814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925497Z","caller":"traceutil/trace.go:171","msg":"trace[1286683626] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"159.582372ms","start":"2024-10-11T21:00:26.765904Z","end":"2024-10-11T21:00:26.925486Z","steps":["trace[1286683626] 'agreement among raft nodes before linearized reading'  (duration: 158.182564ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:26.924238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.248308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925636Z","caller":"traceutil/trace.go:171","msg":"trace[761550027] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"254.646251ms","start":"2024-10-11T21:00:26.670983Z","end":"2024-10-11T21:00:26.925630Z","steps":["trace[761550027] 'agreement among raft nodes before linearized reading'  (duration: 253.239263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:26.924259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.545454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925747Z","caller":"traceutil/trace.go:171","msg":"trace[1910771772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"160.033279ms","start":"2024-10-11T21:00:26.765709Z","end":"2024-10-11T21:00:26.925742Z","steps":["trace[1910771772] 'agreement among raft nodes before linearized reading'  (duration: 158.540684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:38.282793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.145782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:38.283025Z","caller":"traceutil/trace.go:171","msg":"trace[819982128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:972; }","duration":"115.329195ms","start":"2024-10-11T21:00:38.167625Z","end":"2024-10-11T21:00:38.282954Z","steps":["trace[819982128] 'range keys from in-memory index tree'  (duration: 115.060979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T21:00:52.386712Z","caller":"traceutil/trace.go:171","msg":"trace[1290041847] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1097; }","duration":"385.180198ms","start":"2024-10-11T21:00:52.001496Z","end":"2024-10-11T21:00:52.386676Z","steps":["trace[1290041847] 'read index received'  (duration: 384.949736ms)","trace[1290041847] 'applied index is now lower than readState.Index'  (duration: 229.812µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-11T21:00:52.386887Z","caller":"traceutil/trace.go:171","msg":"trace[1758957571] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"396.382816ms","start":"2024-10-11T21:00:51.990490Z","end":"2024-10-11T21:00:52.386873Z","steps":["trace[1758957571] 'process raft request'  (duration: 396.011348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.387027Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:00:51.990465Z","time spent":"396.446406ms","remote":"127.0.0.1:59848","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3132,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:825 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >"}
	{"level":"warn","ts":"2024-10-11T21:00:52.387219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.720059ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.387259Z","caller":"traceutil/trace.go:171","msg":"trace[1285946992] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1066; }","duration":"385.761737ms","start":"2024-10-11T21:00:52.001491Z","end":"2024-10-11T21:00:52.387253Z","steps":["trace[1285946992] 'agreement among raft nodes before linearized reading'  (duration: 385.685639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.388274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.852708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.388320Z","caller":"traceutil/trace.go:171","msg":"trace[387610853] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"301.033631ms","start":"2024-10-11T21:00:52.087278Z","end":"2024-10-11T21:00:52.388312Z","steps":["trace[387610853] 'agreement among raft nodes before linearized reading'  (duration: 300.823131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.388774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.909212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.388967Z","caller":"traceutil/trace.go:171","msg":"trace[745617295] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"125.103791ms","start":"2024-10-11T21:00:52.263854Z","end":"2024-10-11T21:00:52.388958Z","steps":["trace[745617295] 'agreement among raft nodes before linearized reading'  (duration: 124.893165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.389698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:00:52.087244Z","time spent":"301.095278ms","remote":"127.0.0.1:59802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-11T21:00:52.390599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.002237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.390644Z","caller":"traceutil/trace.go:171","msg":"trace[1788118320] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"223.107299ms","start":"2024-10-11T21:00:52.167525Z","end":"2024-10-11T21:00:52.390632Z","steps":["trace[1788118320] 'agreement among raft nodes before linearized reading'  (duration: 222.982967ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.390740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.702594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-11T21:00:52.390771Z","caller":"traceutil/trace.go:171","msg":"trace[1351951569] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1066; }","duration":"297.736282ms","start":"2024-10-11T21:00:52.093030Z","end":"2024-10-11T21:00:52.390766Z","steps":["trace[1351951569] 'agreement among raft nodes before linearized reading'  (duration: 297.687665ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T21:02:31.966992Z","caller":"traceutil/trace.go:171","msg":"trace[1706936483] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"400.269442ms","start":"2024-10-11T21:02:31.566693Z","end":"2024-10-11T21:02:31.966963Z","steps":["trace[1706936483] 'process raft request'  (duration: 399.905237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:02:31.967299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:02:31.566679Z","time spent":"400.386226ms","remote":"127.0.0.1:59788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1385 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-11T21:02:50.230077Z","caller":"traceutil/trace.go:171","msg":"trace[198283386] transaction","detail":"{read_only:false; response_revision:1547; number_of_response:1; }","duration":"103.967802ms","start":"2024-10-11T21:02:50.126082Z","end":"2024-10-11T21:02:50.230050Z","steps":["trace[198283386] 'process raft request'  (duration: 103.53851ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:05:30 up 6 min,  0 users,  load average: 0.50, 0.94, 0.54
	Linux addons-335640 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed] <==
	E1011 21:01:17.984894       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.240.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.240.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.240.179:443: connect: connection refused" logger="UnhandledError"
	I1011 21:01:18.057398       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1011 21:02:17.505464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.109:8443->192.168.39.1:36014: use of closed network connection
	E1011 21:02:17.690100       1 conn.go:339] Error on socket receive: read tcp 192.168.39.109:8443->192.168.39.1:36044: use of closed network connection
	I1011 21:02:26.875269       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.37.133"}
	I1011 21:02:58.399618       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1011 21:03:02.488746       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1011 21:03:02.696293       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1011 21:03:03.733947       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1011 21:03:08.159090       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1011 21:03:08.345354       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.63.223"}
	I1011 21:03:21.424208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.424514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.441085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.441877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.473315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.473374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.507913       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.507942       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.574277       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.574827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1011 21:03:22.507933       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1011 21:03:22.574717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1011 21:03:22.608383       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1011 21:05:28.742645       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.233.141"}
	
	
	==> kube-controller-manager [3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64] <==
	W1011 21:04:01.209437       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:01.209514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:03.232681       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:03.232808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:18.119829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:18.120083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:38.837931       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:38.838014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:42.091705       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:42.091763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:04:43.896000       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:04:43.896063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:05.549657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:05.549733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:26.904775       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:26.904928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1011 21:05:28.561192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.196527ms"
	I1011 21:05:28.585524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.10267ms"
	I1011 21:05:28.585736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.174µs"
	I1011 21:05:28.585853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="68.088µs"
	I1011 21:05:28.590628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.289µs"
	W1011 21:05:29.313881       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:29.314001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:05:30.082842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:05:30.082899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 20:59:34.225544       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 20:59:34.248046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E1011 20:59:34.248108       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 20:59:34.373535       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 20:59:34.373571       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 20:59:34.373592       1 server_linux.go:169] "Using iptables Proxier"
	I1011 20:59:34.385354       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 20:59:34.385687       1 server.go:483] "Version info" version="v1.31.1"
	I1011 20:59:34.385699       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 20:59:34.386894       1 config.go:199] "Starting service config controller"
	I1011 20:59:34.386909       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 20:59:34.386954       1 config.go:105] "Starting endpoint slice config controller"
	I1011 20:59:34.386960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 20:59:34.392669       1 config.go:328] "Starting node config controller"
	I1011 20:59:34.392678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 20:59:34.487297       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 20:59:34.487334       1 shared_informer.go:320] Caches are synced for service config
	I1011 20:59:34.497208       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a] <==
	W1011 20:59:24.275497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:59:24.275623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:24.275858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:59:24.275948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.107830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 20:59:25.108285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.141780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 20:59:25.141829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.142866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 20:59:25.142991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.223560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 20:59:25.223647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.340311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:59:25.340374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.393256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 20:59:25.393338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.393999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 20:59:25.394035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.429653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 20:59:25.429709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.482522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:59:25.482949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.545799       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 20:59:25.545857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 20:59:28.765197       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566555    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfec52c-a8f2-454d-8a60-688497d37e44" containerName="volume-snapshot-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566607    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="935d8f6e-845b-4c20-b293-05d78c9d6470" containerName="csi-attacher"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566615    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="hostpath"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566621    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-external-health-monitor-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566626    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="liveness-probe"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566632    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-provisioner"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566643    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9011781b-8f93-423b-bd92-d3df096f9a14" containerName="volume-snapshot-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566649    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64" containerName="csi-resizer"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566655    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="node-driver-registrar"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566662    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-snapshotter"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566668    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75f37b43-e15a-41da-b663-34daf95a8e16" containerName="task-pv-container"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: E1011 21:05:28.566673    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d73b9e0-25fc-42a7-ab9b-7ba6e195f101" containerName="local-path-provisioner"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566723    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-external-health-monitor-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566729    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="hostpath"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566738    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-snapshotter"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566748    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f37b43-e15a-41da-b663-34daf95a8e16" containerName="task-pv-container"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566757    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d73b9e0-25fc-42a7-ab9b-7ba6e195f101" containerName="local-path-provisioner"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566762    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecfec52c-a8f2-454d-8a60-688497d37e44" containerName="volume-snapshot-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566767    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="9011781b-8f93-423b-bd92-d3df096f9a14" containerName="volume-snapshot-controller"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566772    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="935d8f6e-845b-4c20-b293-05d78c9d6470" containerName="csi-attacher"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566781    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="csi-provisioner"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566786    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="node-driver-registrar"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566790    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c420a2-bc23-4178-a7e7-424c14f1cdee" containerName="liveness-probe"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.566800    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64" containerName="csi-resizer"
	Oct 11 21:05:28 addons-335640 kubelet[1205]: I1011 21:05:28.678877    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnc8r\" (UniqueName: \"kubernetes.io/projected/aca5a6ab-8faf-455e-874f-b4f4f33445f1-kube-api-access-fnc8r\") pod \"hello-world-app-55bf9c44b4-h7nfv\" (UID: \"aca5a6ab-8faf-455e-874f-b4f4f33445f1\") " pod="default/hello-world-app-55bf9c44b4-h7nfv"
	
	
	==> storage-provisioner [f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3] <==
	I1011 20:59:38.953432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 20:59:38.969095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 20:59:38.974933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 20:59:39.214845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 20:59:39.215082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa!
	I1011 20:59:39.216847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26402892-5c04-4086-86d6-b40d74399051", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa became leader
	I1011 20:59:39.633064       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335640 -n addons-335640
helpers_test.go:261: (dbg) Run:  kubectl --context addons-335640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-h7nfv ingress-nginx-admission-create-c7d68 ingress-nginx-admission-patch-hxs7q
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-335640 describe pod hello-world-app-55bf9c44b4-h7nfv ingress-nginx-admission-create-c7d68 ingress-nginx-admission-patch-hxs7q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-335640 describe pod hello-world-app-55bf9c44b4-h7nfv ingress-nginx-admission-create-c7d68 ingress-nginx-admission-patch-hxs7q: exit status 1 (70.779559ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-h7nfv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-335640/192.168.39.109
	Start Time:       Fri, 11 Oct 2024 21:05:28 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnc8r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fnc8r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-h7nfv to addons-335640
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c7d68" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hxs7q" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-335640 describe pod hello-world-app-55bf9c44b4-h7nfv ingress-nginx-admission-create-c7d68 ingress-nginx-admission-patch-hxs7q: exit status 1
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable ingress-dns --alsologtostderr -v=1: (1.247809393s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable ingress --alsologtostderr -v=1: (7.71151369s)
--- FAIL: TestAddons/parallel/Ingress (152.10s)

TestAddons/parallel/MetricsServer (357.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.207967ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zmj4b" [8ec1bee3-86d5-4b1b-ba8e-96e9786005cc] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005318825s
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (106.914725ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 3m24.61687098s

** /stderr **
I1011 21:02:58.618548   18814 retry.go:31] will retry after 3.679117199s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (70.760992ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 3m28.367755963s

** /stderr **
I1011 21:03:02.369413   18814 retry.go:31] will retry after 6.471586104s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (69.217736ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 3m34.90928158s

** /stderr **
I1011 21:03:08.911361   18814 retry.go:31] will retry after 4.203019439s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (62.709487ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 3m39.175635582s

** /stderr **
I1011 21:03:13.177326   18814 retry.go:31] will retry after 10.192732339s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (65.035735ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 3m49.434405371s

** /stderr **
I1011 21:03:23.435904   18814 retry.go:31] will retry after 11.663062041s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (61.43638ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 4m1.159025915s

** /stderr **
I1011 21:03:35.160757   18814 retry.go:31] will retry after 15.16649028s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (63.581595ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 4m16.389350565s

** /stderr **
I1011 21:03:50.391166   18814 retry.go:31] will retry after 36.249351861s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (67.027906ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 4m52.707057204s

** /stderr **
I1011 21:04:26.708755   18814 retry.go:31] will retry after 1m14.789442926s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (61.979056ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 6m7.559619813s

** /stderr **
I1011 21:05:41.561544   18814 retry.go:31] will retry after 53.085947477s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (61.136521ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 7m0.710238078s

** /stderr **
I1011 21:06:34.712024   18814 retry.go:31] will retry after 49.641125871s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (64.649058ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 7m50.42014237s

** /stderr **
I1011 21:07:24.422034   18814 retry.go:31] will retry after 48.682698792s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (63.389662ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 8m39.168836652s

** /stderr **
I1011 21:08:13.170991   18814 retry.go:31] will retry after 34.244529524s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-335640 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-335640 top pods -n kube-system: exit status 1 (62.573165ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9lfb2, age: 9m13.477254727s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335640 -n addons-335640
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 logs -n 25: (1.28658592s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-404031                                                                     | download-only-404031 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-873204                                                                     | download-only-873204 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-999700 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | binary-mirror-999700                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33833                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-999700                                                                     | binary-mirror-999700 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| addons  | enable dashboard -p                                                                         | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-335640                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-335640                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-335640 --wait=true                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 21:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | -p addons-335640                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-335640 ip                                                                            | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-335640 ssh cat                                                                       | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-5e03d062-901b-4d87-ab60-2b2a39b9acde_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:02 UTC | 11 Oct 24 21:02 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-335640 ssh curl -s                                                                   | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335640 addons                                                                        | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:03 UTC | 11 Oct 24 21:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-335640 ip                                                                            | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335640 addons disable                                                                | addons-335640        | jenkins | v1.34.0 | 11 Oct 24 21:05 UTC | 11 Oct 24 21:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:45.095805   19546 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:45.095917   19546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:45.095925   19546 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:45.095928   19546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:45.096096   19546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 20:58:45.096652   19546 out.go:352] Setting JSON to false
	I1011 20:58:45.097400   19546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2470,"bootTime":1728677855,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 20:58:45.097493   19546 start.go:139] virtualization: kvm guest
	I1011 20:58:45.099538   19546 out.go:177] * [addons-335640] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 20:58:45.100872   19546 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 20:58:45.100898   19546 notify.go:220] Checking for updates...
	I1011 20:58:45.103001   19546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:45.104033   19546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:58:45.104984   19546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:45.105950   19546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 20:58:45.106936   19546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 20:58:45.108109   19546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:45.138356   19546 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 20:58:45.139595   19546 start.go:297] selected driver: kvm2
	I1011 20:58:45.139608   19546 start.go:901] validating driver "kvm2" against <nil>
	I1011 20:58:45.139618   19546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 20:58:45.140244   19546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:45.140318   19546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 20:58:45.154523   19546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 20:58:45.154568   19546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:45.154799   19546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:58:45.154828   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:58:45.154869   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:58:45.154876   19546 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:45.154921   19546 start.go:340] cluster config:
	{Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:45.155002   19546 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:45.156547   19546 out.go:177] * Starting "addons-335640" primary control-plane node in "addons-335640" cluster
	I1011 20:58:45.157626   19546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:45.157659   19546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 20:58:45.157669   19546 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:45.157748   19546 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 20:58:45.157759   19546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 20:58:45.158043   19546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json ...
	I1011 20:58:45.158061   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json: {Name:mkcc4401e0bfd13d7ad41ac79776709e9b972584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:45.158192   19546 start.go:360] acquireMachinesLock for addons-335640: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 20:58:45.158253   19546 start.go:364] duration metric: took 44.382µs to acquireMachinesLock for "addons-335640"
	I1011 20:58:45.158274   19546 start.go:93] Provisioning new machine with config: &{Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:58:45.158344   19546 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 20:58:45.159908   19546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1011 20:58:45.160045   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:58:45.160077   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:58:45.173387   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I1011 20:58:45.173826   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:58:45.174334   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:58:45.174352   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:58:45.174721   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:58:45.174887   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:58:45.175039   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:58:45.175176   19546 start.go:159] libmachine.API.Create for "addons-335640" (driver="kvm2")
	I1011 20:58:45.175205   19546 client.go:168] LocalClient.Create starting
	I1011 20:58:45.175244   19546 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 20:58:45.411712   19546 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 20:58:45.650822   19546 main.go:141] libmachine: Running pre-create checks...
	I1011 20:58:45.650847   19546 main.go:141] libmachine: (addons-335640) Calling .PreCreateCheck
	I1011 20:58:45.651347   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:58:45.651829   19546 main.go:141] libmachine: Creating machine...
	I1011 20:58:45.651851   19546 main.go:141] libmachine: (addons-335640) Calling .Create
	I1011 20:58:45.652070   19546 main.go:141] libmachine: (addons-335640) Creating KVM machine...
	I1011 20:58:45.653549   19546 main.go:141] libmachine: (addons-335640) DBG | found existing default KVM network
	I1011 20:58:45.654309   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.654156   19568 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011b1f0}
	I1011 20:58:45.654332   19546 main.go:141] libmachine: (addons-335640) DBG | created network xml: 
	I1011 20:58:45.654344   19546 main.go:141] libmachine: (addons-335640) DBG | <network>
	I1011 20:58:45.654358   19546 main.go:141] libmachine: (addons-335640) DBG |   <name>mk-addons-335640</name>
	I1011 20:58:45.654412   19546 main.go:141] libmachine: (addons-335640) DBG |   <dns enable='no'/>
	I1011 20:58:45.654444   19546 main.go:141] libmachine: (addons-335640) DBG |   
	I1011 20:58:45.654456   19546 main.go:141] libmachine: (addons-335640) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 20:58:45.654468   19546 main.go:141] libmachine: (addons-335640) DBG |     <dhcp>
	I1011 20:58:45.654478   19546 main.go:141] libmachine: (addons-335640) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 20:58:45.654486   19546 main.go:141] libmachine: (addons-335640) DBG |     </dhcp>
	I1011 20:58:45.654496   19546 main.go:141] libmachine: (addons-335640) DBG |   </ip>
	I1011 20:58:45.654502   19546 main.go:141] libmachine: (addons-335640) DBG |   
	I1011 20:58:45.654514   19546 main.go:141] libmachine: (addons-335640) DBG | </network>
	I1011 20:58:45.654524   19546 main.go:141] libmachine: (addons-335640) DBG | 
	I1011 20:58:45.659531   19546 main.go:141] libmachine: (addons-335640) DBG | trying to create private KVM network mk-addons-335640 192.168.39.0/24...
	I1011 20:58:45.723458   19546 main.go:141] libmachine: (addons-335640) DBG | private KVM network mk-addons-335640 192.168.39.0/24 created
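
The driver generates the <network> XML above and hands it to libvirt to define and start the private network. As a rough sketch only (not minikube's actual code path), the same define-and-start step could be expressed with the libvirt.org/go/libvirt bindings as follows; the XML document and the qemu:///system URI come from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed Go binding; the kvm2 driver talks to libvirt through cgo bindings like this
    )

    // networkXML mirrors the <network> document printed in the log above.
    const networkXML = `<network>
      <name>mk-addons-335640</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // URI matches KVMQemuURI in the cluster config above
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Define the persistent network from XML, then start it.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatalf("start network: %v", err)
        }
        fmt.Println("private KVM network mk-addons-335640 created")
    }
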
	I1011 20:58:45.723521   19546 main.go:141] libmachine: (addons-335640) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 ...
	I1011 20:58:45.723555   19546 main.go:141] libmachine: (addons-335640) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 20:58:45.723572   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.723443   19568 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:45.723604   19546 main.go:141] libmachine: (addons-335640) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 20:58:45.992379   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:45.992252   19568 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa...
	I1011 20:58:46.322463   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:46.322359   19568 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/addons-335640.rawdisk...
	I1011 20:58:46.322501   19546 main.go:141] libmachine: (addons-335640) DBG | Writing magic tar header
	I1011 20:58:46.322522   19546 main.go:141] libmachine: (addons-335640) DBG | Writing SSH key tar header
	I1011 20:58:46.322540   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:46.322454   19568 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 ...
	I1011 20:58:46.322561   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640
	I1011 20:58:46.322574   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 20:58:46.322581   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640 (perms=drwx------)
	I1011 20:58:46.322593   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 20:58:46.322603   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 20:58:46.322628   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 20:58:46.322645   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 20:58:46.322653   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:46.322665   19546 main.go:141] libmachine: (addons-335640) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 20:58:46.322681   19546 main.go:141] libmachine: (addons-335640) Creating domain...
	I1011 20:58:46.322694   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 20:58:46.322709   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 20:58:46.322720   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home/jenkins
	I1011 20:58:46.322733   19546 main.go:141] libmachine: (addons-335640) DBG | Checking permissions on dir: /home
	I1011 20:58:46.322744   19546 main.go:141] libmachine: (addons-335640) DBG | Skipping /home - not owner
	I1011 20:58:46.323645   19546 main.go:141] libmachine: (addons-335640) define libvirt domain using xml: 
	I1011 20:58:46.323672   19546 main.go:141] libmachine: (addons-335640) <domain type='kvm'>
	I1011 20:58:46.323689   19546 main.go:141] libmachine: (addons-335640)   <name>addons-335640</name>
	I1011 20:58:46.323716   19546 main.go:141] libmachine: (addons-335640)   <memory unit='MiB'>4000</memory>
	I1011 20:58:46.323722   19546 main.go:141] libmachine: (addons-335640)   <vcpu>2</vcpu>
	I1011 20:58:46.323729   19546 main.go:141] libmachine: (addons-335640)   <features>
	I1011 20:58:46.323735   19546 main.go:141] libmachine: (addons-335640)     <acpi/>
	I1011 20:58:46.323739   19546 main.go:141] libmachine: (addons-335640)     <apic/>
	I1011 20:58:46.323744   19546 main.go:141] libmachine: (addons-335640)     <pae/>
	I1011 20:58:46.323748   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.323777   19546 main.go:141] libmachine: (addons-335640)   </features>
	I1011 20:58:46.323796   19546 main.go:141] libmachine: (addons-335640)   <cpu mode='host-passthrough'>
	I1011 20:58:46.323802   19546 main.go:141] libmachine: (addons-335640)   
	I1011 20:58:46.323809   19546 main.go:141] libmachine: (addons-335640)   </cpu>
	I1011 20:58:46.323815   19546 main.go:141] libmachine: (addons-335640)   <os>
	I1011 20:58:46.323819   19546 main.go:141] libmachine: (addons-335640)     <type>hvm</type>
	I1011 20:58:46.323830   19546 main.go:141] libmachine: (addons-335640)     <boot dev='cdrom'/>
	I1011 20:58:46.323841   19546 main.go:141] libmachine: (addons-335640)     <boot dev='hd'/>
	I1011 20:58:46.323855   19546 main.go:141] libmachine: (addons-335640)     <bootmenu enable='no'/>
	I1011 20:58:46.323861   19546 main.go:141] libmachine: (addons-335640)   </os>
	I1011 20:58:46.323868   19546 main.go:141] libmachine: (addons-335640)   <devices>
	I1011 20:58:46.323874   19546 main.go:141] libmachine: (addons-335640)     <disk type='file' device='cdrom'>
	I1011 20:58:46.323881   19546 main.go:141] libmachine: (addons-335640)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/boot2docker.iso'/>
	I1011 20:58:46.323886   19546 main.go:141] libmachine: (addons-335640)       <target dev='hdc' bus='scsi'/>
	I1011 20:58:46.323890   19546 main.go:141] libmachine: (addons-335640)       <readonly/>
	I1011 20:58:46.323895   19546 main.go:141] libmachine: (addons-335640)     </disk>
	I1011 20:58:46.323903   19546 main.go:141] libmachine: (addons-335640)     <disk type='file' device='disk'>
	I1011 20:58:46.323912   19546 main.go:141] libmachine: (addons-335640)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 20:58:46.323928   19546 main.go:141] libmachine: (addons-335640)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/addons-335640.rawdisk'/>
	I1011 20:58:46.323942   19546 main.go:141] libmachine: (addons-335640)       <target dev='hda' bus='virtio'/>
	I1011 20:58:46.323953   19546 main.go:141] libmachine: (addons-335640)     </disk>
	I1011 20:58:46.323960   19546 main.go:141] libmachine: (addons-335640)     <interface type='network'>
	I1011 20:58:46.323972   19546 main.go:141] libmachine: (addons-335640)       <source network='mk-addons-335640'/>
	I1011 20:58:46.323979   19546 main.go:141] libmachine: (addons-335640)       <model type='virtio'/>
	I1011 20:58:46.323986   19546 main.go:141] libmachine: (addons-335640)     </interface>
	I1011 20:58:46.323990   19546 main.go:141] libmachine: (addons-335640)     <interface type='network'>
	I1011 20:58:46.323998   19546 main.go:141] libmachine: (addons-335640)       <source network='default'/>
	I1011 20:58:46.324002   19546 main.go:141] libmachine: (addons-335640)       <model type='virtio'/>
	I1011 20:58:46.324011   19546 main.go:141] libmachine: (addons-335640)     </interface>
	I1011 20:58:46.324023   19546 main.go:141] libmachine: (addons-335640)     <serial type='pty'>
	I1011 20:58:46.324031   19546 main.go:141] libmachine: (addons-335640)       <target port='0'/>
	I1011 20:58:46.324043   19546 main.go:141] libmachine: (addons-335640)     </serial>
	I1011 20:58:46.324052   19546 main.go:141] libmachine: (addons-335640)     <console type='pty'>
	I1011 20:58:46.324067   19546 main.go:141] libmachine: (addons-335640)       <target type='serial' port='0'/>
	I1011 20:58:46.324080   19546 main.go:141] libmachine: (addons-335640)     </console>
	I1011 20:58:46.324088   19546 main.go:141] libmachine: (addons-335640)     <rng model='virtio'>
	I1011 20:58:46.324093   19546 main.go:141] libmachine: (addons-335640)       <backend model='random'>/dev/random</backend>
	I1011 20:58:46.324100   19546 main.go:141] libmachine: (addons-335640)     </rng>
	I1011 20:58:46.324103   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.324110   19546 main.go:141] libmachine: (addons-335640)     
	I1011 20:58:46.324114   19546 main.go:141] libmachine: (addons-335640)   </devices>
	I1011 20:58:46.324118   19546 main.go:141] libmachine: (addons-335640) </domain>
	I1011 20:58:46.324124   19546 main.go:141] libmachine: (addons-335640) 
	I1011 20:58:46.382034   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:1f:29:89 in network default
	I1011 20:58:46.382570   19546 main.go:141] libmachine: (addons-335640) Ensuring networks are active...
	I1011 20:58:46.382590   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:46.383273   19546 main.go:141] libmachine: (addons-335640) Ensuring network default is active
	I1011 20:58:46.383559   19546 main.go:141] libmachine: (addons-335640) Ensuring network mk-addons-335640 is active
	I1011 20:58:46.383992   19546 main.go:141] libmachine: (addons-335640) Getting domain xml...
	I1011 20:58:46.384580   19546 main.go:141] libmachine: (addons-335640) Creating domain...
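
The <domain type='kvm'> document printed above is submitted the same way. Continuing the hedged sketch from the network step (conn as before; domainXML is a placeholder for the full document logged above):

    // domainXML: the full <domain type='kvm'> document shown in the log
    // (name, memory, vcpu, boot order, cdrom + raw disk, two virtio NICs,
    // serial console, virtio rng).
    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        log.Fatalf("define domain: %v", err)
    }
    defer dom.Free()

    // "Creating domain..." in the log corresponds to starting the defined domain.
    if err := dom.Create(); err != nil {
        log.Fatalf("start domain: %v", err)
    }
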
	I1011 20:58:47.927204   19546 main.go:141] libmachine: (addons-335640) Waiting to get IP...
	I1011 20:58:47.928068   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:47.928549   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:47.928578   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:47.928535   19568 retry.go:31] will retry after 254.276274ms: waiting for machine to come up
	I1011 20:58:48.184671   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.185021   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.185048   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.184978   19568 retry.go:31] will retry after 249.718028ms: waiting for machine to come up
	I1011 20:58:48.436506   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.436904   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.436932   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.436858   19568 retry.go:31] will retry after 468.619344ms: waiting for machine to come up
	I1011 20:58:48.907487   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:48.907879   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:48.907908   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:48.907844   19568 retry.go:31] will retry after 547.218559ms: waiting for machine to come up
	I1011 20:58:49.456565   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:49.457038   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:49.457059   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:49.456997   19568 retry.go:31] will retry after 666.004256ms: waiting for machine to come up
	I1011 20:58:50.124650   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:50.125033   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:50.125053   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:50.124995   19568 retry.go:31] will retry after 844.774679ms: waiting for machine to come up
	I1011 20:58:50.971169   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:50.971566   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:50.971586   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:50.971530   19568 retry.go:31] will retry after 772.181307ms: waiting for machine to come up
	I1011 20:58:51.745330   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:51.745746   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:51.745772   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:51.745704   19568 retry.go:31] will retry after 1.038747096s: waiting for machine to come up
	I1011 20:58:52.785748   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:52.786175   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:52.786211   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:52.786142   19568 retry.go:31] will retry after 1.304891238s: waiting for machine to come up
	I1011 20:58:54.092429   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:54.092819   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:54.092845   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:54.092778   19568 retry.go:31] will retry after 1.637422366s: waiting for machine to come up
	I1011 20:58:55.731521   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:55.731925   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:55.731948   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:55.731891   19568 retry.go:31] will retry after 2.869520339s: waiting for machine to come up
	I1011 20:58:58.605028   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:58:58.605487   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:58:58.605508   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:58:58.605454   19568 retry.go:31] will retry after 3.228381586s: waiting for machine to come up
	I1011 20:59:01.836051   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:01.836450   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:59:01.836471   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:59:01.836402   19568 retry.go:31] will retry after 3.104216969s: waiting for machine to come up
	I1011 20:59:04.944517   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:04.944993   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find current IP address of domain addons-335640 in network mk-addons-335640
	I1011 20:59:04.945017   19546 main.go:141] libmachine: (addons-335640) DBG | I1011 20:59:04.944941   19568 retry.go:31] will retry after 4.185077738s: waiting for machine to come up
	I1011 20:59:09.134077   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.134501   19546 main.go:141] libmachine: (addons-335640) Found IP for machine: 192.168.39.109
	I1011 20:59:09.134524   19546 main.go:141] libmachine: (addons-335640) Reserving static IP address...
	I1011 20:59:09.134536   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has current primary IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.134940   19546 main.go:141] libmachine: (addons-335640) DBG | unable to find host DHCP lease matching {name: "addons-335640", mac: "52:54:00:8b:e5:d7", ip: "192.168.39.109"} in network mk-addons-335640
	I1011 20:59:09.201559   19546 main.go:141] libmachine: (addons-335640) DBG | Getting to WaitForSSH function...
	I1011 20:59:09.201589   19546 main.go:141] libmachine: (addons-335640) Reserved static IP address: 192.168.39.109
	I1011 20:59:09.201602   19546 main.go:141] libmachine: (addons-335640) Waiting for SSH to be available...
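
The repeated "will retry after ...: waiting for machine to come up" lines above come from polling the network's DHCP leases for the domain's MAC address until one appears. A simplified, self-contained illustration of that polling loop with the same assumed bindings (MAC and network name from the log; the backoff schedule here is made up, not the driver's real one):

    package main

    import (
        "fmt"
        "log"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        network, err := conn.LookupNetworkByName("mk-addons-335640")
        if err != nil {
            log.Fatal(err)
        }
        defer network.Free()

        const mac = "52:54:00:8b:e5:d7" // domain MAC from the log
        backoff := 250 * time.Millisecond
        for {
            leases, err := network.GetDHCPLeases()
            if err == nil {
                for _, l := range leases {
                    if l.Mac == mac && l.IPaddr != "" {
                        fmt.Println("found IP for machine:", l.IPaddr)
                        return
                    }
                }
            }
            log.Printf("will retry after %v: waiting for machine to come up", backoff)
            time.Sleep(backoff)
            backoff *= 2 // simplified; the real retry uses a jittered schedule
        }
    }
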
	I1011 20:59:09.204242   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.204691   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.204718   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.204885   19546 main.go:141] libmachine: (addons-335640) DBG | Using SSH client type: external
	I1011 20:59:09.204896   19546 main.go:141] libmachine: (addons-335640) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa (-rw-------)
	I1011 20:59:09.204916   19546 main.go:141] libmachine: (addons-335640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 20:59:09.204924   19546 main.go:141] libmachine: (addons-335640) DBG | About to run SSH command:
	I1011 20:59:09.204931   19546 main.go:141] libmachine: (addons-335640) DBG | exit 0
	I1011 20:59:09.338941   19546 main.go:141] libmachine: (addons-335640) DBG | SSH cmd err, output: <nil>: 
	I1011 20:59:09.339300   19546 main.go:141] libmachine: (addons-335640) KVM machine creation complete!
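
"Waiting for SSH to be available" boils down to running a no-op command over SSH until it succeeds. The log shows this done by shelling out to /usr/bin/ssh with the options listed above; the sketch below performs the equivalent probe with golang.org/x/crypto/ssh purely for illustration (key path, user and address taken from the log):

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        for {
            if client, err := ssh.Dial("tcp", "192.168.39.109:22", cfg); err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    runErr := sess.Run("exit 0") // same no-op command used by the provisioner
                    sess.Close()
                    if runErr == nil {
                        client.Close()
                        log.Println("SSH is available")
                        return
                    }
                }
                client.Close()
            }
            time.Sleep(2 * time.Second) // keep retrying until the guest's sshd answers
        }
    }
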
	I1011 20:59:09.339654   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:59:09.340181   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:09.340434   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:09.340624   19546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 20:59:09.340646   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:09.341805   19546 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 20:59:09.341820   19546 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 20:59:09.341825   19546 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 20:59:09.341830   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.343973   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.344310   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.344339   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.344464   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.344603   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.344724   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.344806   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.344906   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.345082   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.345093   19546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 20:59:09.453911   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:59:09.453930   19546 main.go:141] libmachine: Detecting the provisioner...
	I1011 20:59:09.453938   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.456692   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.457185   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.457226   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.457437   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.457668   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.457850   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.457971   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.458136   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.458308   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.458321   19546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 20:59:09.567372   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 20:59:09.567432   19546 main.go:141] libmachine: found compatible host: buildroot
	I1011 20:59:09.567438   19546 main.go:141] libmachine: Provisioning with buildroot...
	I1011 20:59:09.567451   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.567692   19546 buildroot.go:166] provisioning hostname "addons-335640"
	I1011 20:59:09.567717   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.567890   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.570834   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.571151   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.571176   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.571283   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.571470   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.571658   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.571816   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.571983   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.572236   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.572253   19546 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-335640 && echo "addons-335640" | sudo tee /etc/hostname
	I1011 20:59:09.697442   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335640
	
	I1011 20:59:09.697472   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.700221   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.700588   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.700625   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.700777   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:09.700958   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.701092   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:09.701194   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:09.701320   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:09.701526   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:09.701550   19546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335640/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 20:59:09.819460   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:59:09.819495   19546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 20:59:09.819545   19546 buildroot.go:174] setting up certificates
	I1011 20:59:09.819563   19546 provision.go:84] configureAuth start
	I1011 20:59:09.819582   19546 main.go:141] libmachine: (addons-335640) Calling .GetMachineName
	I1011 20:59:09.819854   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:09.822188   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.822458   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.822482   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.822593   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:09.824937   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.825235   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:09.825262   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:09.825372   19546 provision.go:143] copyHostCerts
	I1011 20:59:09.825461   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 20:59:09.825660   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 20:59:09.825762   19546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 20:59:09.825841   19546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.addons-335640 san=[127.0.0.1 192.168.39.109 addons-335640 localhost minikube]
	I1011 20:59:10.017292   19546 provision.go:177] copyRemoteCerts
	I1011 20:59:10.017349   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 20:59:10.017371   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.019883   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.020386   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.020424   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.020594   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.020750   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.020860   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.020969   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.106005   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 20:59:10.131409   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 20:59:10.154440   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 20:59:10.177247   19546 provision.go:87] duration metric: took 357.667235ms to configureAuth
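
configureAuth above copies the host CA material and issues a server certificate whose SANs are listed in the "generating server cert" line. A minimal sketch of producing such a SAN-bearing server certificate with Go's standard crypto/x509; the self-generated CA here is only a stand-in for the .minikube/certs files, while the SAN names and IPs are taken from the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (the real run loads ca.pem / ca-key.pem from disk).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log:
        // san=[127.0.0.1 192.168.39.109 addons-335640 localhost minikube]
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-335640"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-335640", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.109")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // errors ignored for brevity
    }
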
	I1011 20:59:10.177274   19546 buildroot.go:189] setting minikube options for container-runtime
	I1011 20:59:10.177447   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:10.177516   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.180373   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.180727   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.180759   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.180941   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.181128   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.181286   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.181407   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.181578   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:10.181775   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:10.181795   19546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 20:59:10.401715   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 20:59:10.401739   19546 main.go:141] libmachine: Checking connection to Docker...
	I1011 20:59:10.401748   19546 main.go:141] libmachine: (addons-335640) Calling .GetURL
	I1011 20:59:10.403011   19546 main.go:141] libmachine: (addons-335640) DBG | Using libvirt version 6000000
	I1011 20:59:10.405132   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.405390   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.405413   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.405544   19546 main.go:141] libmachine: Docker is up and running!
	I1011 20:59:10.405557   19546 main.go:141] libmachine: Reticulating splines...
	I1011 20:59:10.405565   19546 client.go:171] duration metric: took 25.230349012s to LocalClient.Create
	I1011 20:59:10.405592   19546 start.go:167] duration metric: took 25.230416192s to libmachine.API.Create "addons-335640"
	I1011 20:59:10.405605   19546 start.go:293] postStartSetup for "addons-335640" (driver="kvm2")
	I1011 20:59:10.405624   19546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 20:59:10.405647   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.405883   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 20:59:10.405911   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.407980   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.408276   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.408302   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.408391   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.408569   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.408709   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.408856   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.492466   19546 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 20:59:10.496604   19546 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 20:59:10.496631   19546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 20:59:10.496698   19546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 20:59:10.496721   19546 start.go:296] duration metric: took 91.104646ms for postStartSetup
	I1011 20:59:10.496748   19546 main.go:141] libmachine: (addons-335640) Calling .GetConfigRaw
	I1011 20:59:10.497246   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:10.499792   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.500125   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.500153   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.500384   19546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/config.json ...
	I1011 20:59:10.500580   19546 start.go:128] duration metric: took 25.342225257s to createHost
	I1011 20:59:10.500603   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.502965   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.503275   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.503295   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.503439   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.503618   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.503806   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.503941   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.504097   19546 main.go:141] libmachine: Using SSH client type: native
	I1011 20:59:10.504247   19546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1011 20:59:10.504257   19546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 20:59:10.615108   19546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728680350.590549237
	
	I1011 20:59:10.615131   19546 fix.go:216] guest clock: 1728680350.590549237
	I1011 20:59:10.615140   19546 fix.go:229] Guest: 2024-10-11 20:59:10.590549237 +0000 UTC Remote: 2024-10-11 20:59:10.500593928 +0000 UTC m=+25.440663918 (delta=89.955309ms)
	I1011 20:59:10.615164   19546 fix.go:200] guest clock delta is within tolerance: 89.955309ms
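
The guest-clock lines above compare the output of date +%s.%N on the guest with the host clock and accept the machine when the skew is small. A tiny sketch of that comparison; the timestamp is the one captured in the log, and the tolerance value is hypothetical rather than minikube's actual threshold:

    package main

    import (
        "log"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1728680350.590549237" // guest `date +%s.%N` output from the log
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            log.Fatal(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // hypothetical threshold
        if delta > tolerance {
            log.Printf("guest clock delta %v exceeds tolerance; guest time would need adjusting", delta)
        } else {
            log.Printf("guest clock delta %v is within tolerance", delta)
        }
    }
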
	I1011 20:59:10.615171   19546 start.go:83] releasing machines lock for "addons-335640", held for 25.456906139s
	I1011 20:59:10.615211   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.615455   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:10.617866   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.618186   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.618211   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.618359   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.618786   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.618947   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:10.619036   19546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 20:59:10.619085   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.619121   19546 ssh_runner.go:195] Run: cat /version.json
	I1011 20:59:10.619139   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:10.621546   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.621725   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.621966   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.621990   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.622066   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:10.622091   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:10.622104   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.622288   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:10.622290   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.622482   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.622491   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:10.622609   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:10.622641   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.622748   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:10.721293   19546 ssh_runner.go:195] Run: systemctl --version
	I1011 20:59:10.726845   19546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 20:59:10.880943   19546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 20:59:10.887396   19546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 20:59:10.887452   19546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:59:10.903262   19546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 20:59:10.903284   19546 start.go:495] detecting cgroup driver to use...
	I1011 20:59:10.903341   19546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 20:59:10.919240   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 20:59:10.932570   19546 docker.go:217] disabling cri-docker service (if available) ...
	I1011 20:59:10.932611   19546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 20:59:10.945530   19546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 20:59:10.958778   19546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 20:59:11.070368   19546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 20:59:11.209447   19546 docker.go:233] disabling docker service ...
	I1011 20:59:11.209531   19546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 20:59:11.227976   19546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 20:59:11.240967   19546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 20:59:11.369226   19546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 20:59:11.478432   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 20:59:11.492048   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 20:59:11.510159   19546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 20:59:11.510221   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.519862   19546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 20:59:11.519918   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.529783   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.539335   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.549111   19546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 20:59:11.558765   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.568749   19546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 20:59:11.585810   19546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
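
Taken together, the sed edits above point CRI-O at the 3.10 pause image, switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged ports via default_sysctls. Reconstructed from those commands (not read back from the VM), the resulting drop-in would look roughly like:

    # /etc/crio/crio.conf.d/02-crio.conf (illustrative end state)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
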
	I1011 20:59:11.595932   19546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 20:59:11.605399   19546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 20:59:11.605436   19546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 20:59:11.617207   19546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 20:59:11.626271   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:11.729923   19546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 20:59:11.815515   19546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 20:59:11.815619   19546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 20:59:11.820243   19546 start.go:563] Will wait 60s for crictl version
	I1011 20:59:11.820303   19546 ssh_runner.go:195] Run: which crictl
	I1011 20:59:11.823957   19546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 20:59:11.859903   19546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 20:59:11.860025   19546 ssh_runner.go:195] Run: crio --version
	I1011 20:59:11.886169   19546 ssh_runner.go:195] Run: crio --version
	I1011 20:59:11.920120   19546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 20:59:11.921440   19546 main.go:141] libmachine: (addons-335640) Calling .GetIP
	I1011 20:59:11.924313   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:11.924611   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:11.924641   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:11.924852   19546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 20:59:11.929004   19546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:59:11.943020   19546 kubeadm.go:883] updating cluster {Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 20:59:11.943108   19546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:59:11.943147   19546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:59:11.977761   19546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 20:59:11.977817   19546 ssh_runner.go:195] Run: which lz4
	I1011 20:59:11.981746   19546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 20:59:11.985848   19546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 20:59:11.985876   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 20:59:13.241545   19546 crio.go:462] duration metric: took 1.259823876s to copy over tarball
	I1011 20:59:13.241631   19546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 20:59:15.322988   19546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.081328818s)
	I1011 20:59:15.323014   19546 crio.go:469] duration metric: took 2.081436779s to extract the tarball
	I1011 20:59:15.323020   19546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 20:59:15.359316   19546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:59:15.398629   19546 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 20:59:15.398657   19546 cache_images.go:84] Images are preloaded, skipping loading
	I1011 20:59:15.398668   19546 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.1 crio true true} ...
	I1011 20:59:15.398762   19546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-335640 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 20:59:15.398825   19546 ssh_runner.go:195] Run: crio config
	I1011 20:59:15.440683   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:59:15.440704   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:59:15.440715   19546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 20:59:15.440736   19546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335640 NodeName:addons-335640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 20:59:15.440889   19546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335640"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 20:59:15.440951   19546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 20:59:15.451534   19546 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 20:59:15.451588   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 20:59:15.461511   19546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 20:59:15.479080   19546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 20:59:15.494746   19546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
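	Note: the kubeadm configuration rendered above (written here to /var/tmp/minikube/kubeadm.yaml.new) still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about during init further down. As a rough sketch, the migration command that kubeadm itself suggests could be run against that file on the node to preview the newer API version; the output path below is illustrative only, not something the test does:
	  # Sketch (assumptions: run inside the node; /tmp/kubeadm-migrated.yaml is a hypothetical output path)
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml.new \
	    --new-config /tmp/kubeadm-migrated.yaml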
	I1011 20:59:15.510069   19546 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I1011 20:59:15.513532   19546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:59:15.524827   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:15.640530   19546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:59:15.656632   19546 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640 for IP: 192.168.39.109
	I1011 20:59:15.656656   19546 certs.go:194] generating shared ca certs ...
	I1011 20:59:15.656675   19546 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.656833   19546 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 20:59:15.750119   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt ...
	I1011 20:59:15.750145   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt: {Name:mk59e4c1dd20a57ddfdecdead44a6c371bcde09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.750305   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key ...
	I1011 20:59:15.750315   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key: {Name:mkd5a8efca580bc196234d3996e36d59c7b10106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.750378   19546 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 20:59:15.899980   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt ...
	I1011 20:59:15.900005   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt: {Name:mkaf5d3d9a411319b7249c0cf53803531482c9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.900145   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key ...
	I1011 20:59:15.900154   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key: {Name:mkcec5e8de07126d8bd86589cc4b12e25aacbb98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:15.900219   19546 certs.go:256] generating profile certs ...
	I1011 20:59:15.900281   19546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key
	I1011 20:59:15.900295   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt with IP's: []
	I1011 20:59:16.242464   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt ...
	I1011 20:59:16.242496   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: {Name:mkb1894ea5e6639a50eda6724b826de9b1c4351f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.242695   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key ...
	I1011 20:59:16.242711   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.key: {Name:mkad705e8f90bda02c5fdd44b787aef0c0e96380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.242816   19546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21
	I1011 20:59:16.242841   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109]
	I1011 20:59:16.391701   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 ...
	I1011 20:59:16.391731   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21: {Name:mk996a223a3c5b4f3388013a4020ebd8365a247d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.391906   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21 ...
	I1011 20:59:16.391922   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21: {Name:mkfa1ce2ca00815018180f9fccdbfe365ed06a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.392014   19546 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt.c5167f21 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt
	I1011 20:59:16.392107   19546 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key.c5167f21 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key
	I1011 20:59:16.392184   19546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key
	I1011 20:59:16.392221   19546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt with IP's: []
	I1011 20:59:16.548763   19546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt ...
	I1011 20:59:16.548793   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt: {Name:mk072dae23020d365c8024519557199cd3978574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.548969   19546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key ...
	I1011 20:59:16.548983   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key: {Name:mk957d5f5556f37c9c09f52acb478d5bd144d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:16.549173   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 20:59:16.549222   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 20:59:16.549261   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 20:59:16.549295   19546 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 20:59:16.549879   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 20:59:16.576634   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 20:59:16.598104   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 20:59:16.630239   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 20:59:16.652867   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1011 20:59:16.675176   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 20:59:16.697252   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 20:59:16.719557   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 20:59:16.741882   19546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 20:59:16.763713   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 20:59:16.778887   19546 ssh_runner.go:195] Run: openssl version
	I1011 20:59:16.784328   19546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 20:59:16.793997   19546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.798109   19546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.798159   19546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:59:16.803654   19546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
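	The two steps above follow the standard OpenSSL trust-store convention: openssl x509 -hash -noout prints the certificate's subject hash, and the certificate is then symlinked into /etc/ssl/certs as <hash>.0 so TLS libraries can locate it by hash (b5213941 is that hash for minikubeCA.pem). A minimal sketch of the same pattern, assuming a local file named ca.pem:
	  # Sketch: link a CA certificate into the system trust store under its subject hash
	  HASH=$(openssl x509 -hash -noout -in ca.pem)
	  sudo ln -fs "$(pwd)/ca.pem" "/etc/ssl/certs/${HASH}.0"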
	I1011 20:59:16.813332   19546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 20:59:16.817034   19546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 20:59:16.817076   19546 kubeadm.go:392] StartCluster: {Name:addons-335640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:59:16.817201   19546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 20:59:16.817231   19546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 20:59:16.854629   19546 cri.go:89] found id: ""
	I1011 20:59:16.854686   19546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 20:59:16.863627   19546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 20:59:16.875687   19546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 20:59:16.886486   19546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 20:59:16.886503   19546 kubeadm.go:157] found existing configuration files:
	
	I1011 20:59:16.886538   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 20:59:16.895457   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 20:59:16.895514   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 20:59:16.904355   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 20:59:16.914064   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 20:59:16.914128   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 20:59:16.923397   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 20:59:16.932185   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 20:59:16.932244   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 20:59:16.940959   19546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 20:59:16.949445   19546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 20:59:16.949485   19546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 20:59:16.958007   19546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 20:59:17.007883   19546 kubeadm.go:310] W1011 20:59:16.990457     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:59:17.009109   19546 kubeadm.go:310] W1011 20:59:16.991989     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:59:17.120320   19546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 20:59:27.374404   19546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 20:59:27.374474   19546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 20:59:27.374553   19546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 20:59:27.374680   19546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 20:59:27.374818   19546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 20:59:27.374875   19546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 20:59:27.376394   19546 out.go:235]   - Generating certificates and keys ...
	I1011 20:59:27.376476   19546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 20:59:27.376530   19546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 20:59:27.376589   19546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 20:59:27.376639   19546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 20:59:27.376691   19546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 20:59:27.376733   19546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 20:59:27.376778   19546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 20:59:27.376897   19546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-335640 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I1011 20:59:27.376944   19546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 20:59:27.377044   19546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-335640 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I1011 20:59:27.377143   19546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 20:59:27.377257   19546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 20:59:27.377329   19546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 20:59:27.377403   19546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 20:59:27.377488   19546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 20:59:27.377583   19546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 20:59:27.377664   19546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 20:59:27.377742   19546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 20:59:27.377814   19546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 20:59:27.377915   19546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 20:59:27.378005   19546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 20:59:27.379746   19546 out.go:235]   - Booting up control plane ...
	I1011 20:59:27.379834   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 20:59:27.379916   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 20:59:27.380005   19546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 20:59:27.380149   19546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 20:59:27.380274   19546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 20:59:27.380332   19546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 20:59:27.380484   19546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 20:59:27.380573   19546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 20:59:27.380625   19546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002127902s
	I1011 20:59:27.380684   19546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 20:59:27.380732   19546 kubeadm.go:310] [api-check] The API server is healthy after 5.001807492s
	I1011 20:59:27.380836   19546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 20:59:27.380978   19546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 20:59:27.381043   19546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 20:59:27.381228   19546 kubeadm.go:310] [mark-control-plane] Marking the node addons-335640 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 20:59:27.381273   19546 kubeadm.go:310] [bootstrap-token] Using token: fr560h.qqb2i4guniq1cfyk
	I1011 20:59:27.382828   19546 out.go:235]   - Configuring RBAC rules ...
	I1011 20:59:27.382946   19546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 20:59:27.383035   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 20:59:27.383198   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 20:59:27.383353   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 20:59:27.383477   19546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 20:59:27.383562   19546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 20:59:27.383691   19546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 20:59:27.383761   19546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 20:59:27.383809   19546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 20:59:27.383815   19546 kubeadm.go:310] 
	I1011 20:59:27.383860   19546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 20:59:27.383869   19546 kubeadm.go:310] 
	I1011 20:59:27.383935   19546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 20:59:27.383944   19546 kubeadm.go:310] 
	I1011 20:59:27.383968   19546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 20:59:27.384017   19546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 20:59:27.384057   19546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 20:59:27.384062   19546 kubeadm.go:310] 
	I1011 20:59:27.384109   19546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 20:59:27.384114   19546 kubeadm.go:310] 
	I1011 20:59:27.384149   19546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 20:59:27.384155   19546 kubeadm.go:310] 
	I1011 20:59:27.384200   19546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 20:59:27.384276   19546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 20:59:27.384364   19546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 20:59:27.384380   19546 kubeadm.go:310] 
	I1011 20:59:27.384482   19546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 20:59:27.384587   19546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 20:59:27.384597   19546 kubeadm.go:310] 
	I1011 20:59:27.384697   19546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fr560h.qqb2i4guniq1cfyk \
	I1011 20:59:27.384807   19546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 20:59:27.384828   19546 kubeadm.go:310] 	--control-plane 
	I1011 20:59:27.384832   19546 kubeadm.go:310] 
	I1011 20:59:27.384900   19546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 20:59:27.384906   19546 kubeadm.go:310] 
	I1011 20:59:27.384972   19546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fr560h.qqb2i4guniq1cfyk \
	I1011 20:59:27.385068   19546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
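	The join commands printed above embed a bootstrap token (fr560h.qqb2i4guniq1cfyk, 24h TTL per the config earlier) and the CA certificate hash. If that token expires before another node joins, an equivalent command can be regenerated on the control plane; a sketch, assuming kubeadm is on the PATH inside the node:
	  # Sketch: mint a fresh bootstrap token and print the matching worker join command
	  kubeadm token create --print-join-command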
	I1011 20:59:27.385078   19546 cni.go:84] Creating CNI manager for ""
	I1011 20:59:27.385084   19546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:59:27.386640   19546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 20:59:27.387711   19546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 20:59:27.398468   19546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 20:59:27.418036   19546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 20:59:27.418131   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:27.418136   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335640 minikube.k8s.io/updated_at=2024_10_11T20_59_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=addons-335640 minikube.k8s.io/primary=true
	I1011 20:59:27.455258   19546 ops.go:34] apiserver oom_adj: -16
	I1011 20:59:27.563867   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:28.064669   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:28.563956   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:29.064719   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:29.564686   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:30.064046   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:30.564700   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.064701   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.564540   19546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:59:31.688311   19546 kubeadm.go:1113] duration metric: took 4.270258923s to wait for elevateKubeSystemPrivileges
	I1011 20:59:31.688354   19546 kubeadm.go:394] duration metric: took 14.871281082s to StartCluster
	I1011 20:59:31.688377   19546 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:31.688512   19546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:59:31.688967   19546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:59:31.689144   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 20:59:31.689154   19546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 20:59:31.689214   19546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
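	The toEnable map above is the per-profile addon selection for this run; the same toggles are exposed through the minikube CLI, for example (addon name chosen purely for illustration):
	  # Sketch: inspect and toggle addons on this profile from the minikube CLI
	  minikube -p addons-335640 addons list
	  minikube -p addons-335640 addons enable metrics-server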
	I1011 20:59:31.689330   19546 addons.go:69] Setting yakd=true in profile "addons-335640"
	I1011 20:59:31.689345   19546 addons.go:69] Setting inspektor-gadget=true in profile "addons-335640"
	I1011 20:59:31.689356   19546 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-335640"
	I1011 20:59:31.689360   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:31.689370   19546 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335640"
	I1011 20:59:31.689373   19546 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-335640"
	I1011 20:59:31.689363   19546 addons.go:69] Setting storage-provisioner=true in profile "addons-335640"
	I1011 20:59:31.689392   19546 addons.go:69] Setting cloud-spanner=true in profile "addons-335640"
	I1011 20:59:31.689364   19546 addons.go:234] Setting addon inspektor-gadget=true in "addons-335640"
	I1011 20:59:31.689402   19546 addons.go:234] Setting addon cloud-spanner=true in "addons-335640"
	I1011 20:59:31.689391   19546 addons.go:69] Setting volcano=true in profile "addons-335640"
	I1011 20:59:31.689411   19546 addons.go:69] Setting metrics-server=true in profile "addons-335640"
	I1011 20:59:31.689419   19546 addons.go:234] Setting addon volcano=true in "addons-335640"
	I1011 20:59:31.689421   19546 addons.go:234] Setting addon metrics-server=true in "addons-335640"
	I1011 20:59:31.689428   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689424   19546 addons.go:69] Setting gcp-auth=true in profile "addons-335640"
	I1011 20:59:31.689447   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689447   19546 addons.go:69] Setting ingress-dns=true in profile "addons-335640"
	I1011 20:59:31.689458   19546 addons.go:234] Setting addon ingress-dns=true in "addons-335640"
	I1011 20:59:31.689461   19546 mustload.go:65] Loading cluster: addons-335640
	I1011 20:59:31.689467   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689496   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689656   19546 config.go:182] Loaded profile config "addons-335640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 20:59:31.689348   19546 addons.go:234] Setting addon yakd=true in "addons-335640"
	I1011 20:59:31.689770   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689848   19546 addons.go:69] Setting volumesnapshots=true in profile "addons-335640"
	I1011 20:59:31.689857   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689861   19546 addons.go:234] Setting addon volumesnapshots=true in "addons-335640"
	I1011 20:59:31.689863   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689878   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689882   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.689895   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689926   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689947   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689946   19546 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-335640"
	I1011 20:59:31.689959   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.689979   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689981   19546 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-335640"
	I1011 20:59:31.689983   19546 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-335640"
	I1011 20:59:31.689994   19546 addons.go:69] Setting registry=true in profile "addons-335640"
	I1011 20:59:31.689997   19546 addons.go:69] Setting default-storageclass=true in profile "addons-335640"
	I1011 20:59:31.690005   19546 addons.go:234] Setting addon registry=true in "addons-335640"
	I1011 20:59:31.690010   19546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-335640"
	I1011 20:59:31.690023   19546 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-335640"
	I1011 20:59:31.689430   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690105   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690138   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690172   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690205   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690240   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.689437   19546 addons.go:69] Setting ingress=true in profile "addons-335640"
	I1011 20:59:31.690379   19546 addons.go:234] Setting addon ingress=true in "addons-335640"
	I1011 20:59:31.689385   19546 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-335640"
	I1011 20:59:31.689402   19546 addons.go:234] Setting addon storage-provisioner=true in "addons-335640"
	I1011 20:59:31.690494   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690519   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690581   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690594   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690630   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690597   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690665   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690676   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.690679   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690703   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.690802   19546 out.go:177] * Verifying Kubernetes components...
	I1011 20:59:31.690916   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690919   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.690981   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691008   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691020   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691036   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691108   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.691202   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691228   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.691316   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.691360   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.695773   19546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:59:31.707626   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I1011 20:59:31.708159   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.709285   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.709309   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.709710   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.710707   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I1011 20:59:31.710714   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I1011 20:59:31.711164   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.711208   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.711525   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.711562   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.711169   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.714054   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.714109   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
	I1011 20:59:31.715039   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715160   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715256   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.715532   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715546   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.715680   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715693   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.715821   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.715847   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.716191   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716252   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716260   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.716792   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.716826   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.725213   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I1011 20:59:31.725617   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.726114   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.726135   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.726492   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.726685   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.728487   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.728887   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.728933   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.739050   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.739102   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.739058   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.739200   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.746492   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I1011 20:59:31.747007   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.747846   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.747889   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.747975   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I1011 20:59:31.748214   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.748750   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.748790   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.749002   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I1011 20:59:31.753536   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.754114   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.754140   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.754511   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.755117   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.755150   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.755388   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.755705   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I1011 20:59:31.756011   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.756026   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.756094   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.756601   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.756616   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.757004   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.757058   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I1011 20:59:31.757312   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.757430   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.757534   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.758077   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.758122   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.758376   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1011 20:59:31.758469   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.758485   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.758847   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.758920   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.759432   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.759476   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.759707   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.759752   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.759775   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.760178   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.760795   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.760827   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.782590   19546 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1011 20:59:31.783290   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1011 20:59:31.783318   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I1011 20:59:31.783455   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1011 20:59:31.783522   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I1011 20:59:31.783576   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I1011 20:59:31.783622   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1011 20:59:31.783708   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I1011 20:59:31.783758   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1011 20:59:31.783897   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.783979   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784136   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784345   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1011 20:59:31.784360   19546 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1011 20:59:31.784382   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.784753   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.784914   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.784926   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.784982   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.785066   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785078   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785091   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.785553   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785572   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785628   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.785662   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.785779   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785789   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.785823   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.785833   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.785838   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.786083   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.786100   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.786151   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.786188   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786640   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786644   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786677   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.786845   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.786869   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.787291   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.787324   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.788152   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.788828   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.788844   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.788903   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.789606   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.789636   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.789931   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.790519   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.790554   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.791238   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.791584   19546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 20:59:31.791774   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1011 20:59:31.791889   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.791907   19546 addons.go:234] Setting addon default-storageclass=true in "addons-335640"
	I1011 20:59:31.791910   19546 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-335640"
	I1011 20:59:31.791919   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.791936   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I1011 20:59:31.791943   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.791954   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:31.792096   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.792284   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.792313   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.792445   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.792483   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.792488   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.792699   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.792816   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.792992   19546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:59:31.793004   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 20:59:31.793017   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.793076   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1011 20:59:31.793084   19546 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1011 20:59:31.793098   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.793166   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I1011 20:59:31.793451   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.793747   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.793852   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.794316   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794331   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.794826   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794850   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.794985   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.794994   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.795350   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.795572   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.795616   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.795805   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.796306   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.796753   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.796773   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.796899   19546 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1011 20:59:31.797045   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797074   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.797571   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.797591   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797778   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.797902   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.797982   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.798179   19546 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1011 20:59:31.798184   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.798195   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1011 20:59:31.798215   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.798320   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.799430   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.799932   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.800386   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.800406   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.800414   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.800993   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.801092   19546 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1011 20:59:31.801214   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.801336   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.802073   19546 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:59:31.802094   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1011 20:59:31.802111   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.802184   19546 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1011 20:59:31.802435   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1011 20:59:31.802818   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.802865   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.803269   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.803290   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.803355   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.803379   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.803457   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.803607   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.803623   19546 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1011 20:59:31.803641   19546 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1011 20:59:31.803664   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.803717   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.803821   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.804422   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.804587   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.806065   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.806272   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:31.806294   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:31.808245   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.808304   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.808327   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.808343   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.808363   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:31.808383   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:31.808390   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:31.808397   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:31.808403   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:31.808637   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.808779   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.808926   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.809173   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.809434   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.809452   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.809494   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:31.809590   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:31.809604   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	W1011 20:59:31.809674   19546 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1011 20:59:31.809721   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.809864   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.809993   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.810133   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.810381   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44055
	I1011 20:59:31.810961   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.811518   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.811533   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.811934   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.812098   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.812316   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I1011 20:59:31.812648   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.812913   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I1011 20:59:31.813120   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.813131   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.813762   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.814359   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.814386   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.815087   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1011 20:59:31.815183   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.815322   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.815690   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.815762   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.815776   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.816051   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.816199   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.817487   19546 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1011 20:59:31.817531   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.817558   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.817894   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.818315   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.818522   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 20:59:31.818540   19546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 20:59:31.818559   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.820953   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.821679   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.822067   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.822097   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.822225   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.822357   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.822514   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.822649   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.822829   19546 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1011 20:59:31.823300   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I1011 20:59:31.823749   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.824167   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.824185   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.824255   19546 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:59:31.824271   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1011 20:59:31.824288   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.824493   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.824654   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.826512   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.827401   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.827865   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.827955   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.827991   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1011 20:59:31.828086   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.828228   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.828385   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.828499   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.829915   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I1011 20:59:31.830209   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.830298   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I1011 20:59:31.830744   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.830760   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.831113   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.831285   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.831721   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.831736   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.832081   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.832159   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.832191   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.832223   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.832337   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1011 20:59:31.833536   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1011 20:59:31.834017   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.835249   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1011 20:59:31.835270   19546 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1011 20:59:31.836323   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1011 20:59:31.836352   19546 out.go:177]   - Using image docker.io/registry:2.8.3
	I1011 20:59:31.837436   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1011 20:59:31.837487   19546 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1011 20:59:31.837499   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1011 20:59:31.837520   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.837781   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41225
	I1011 20:59:31.838145   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I1011 20:59:31.838453   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.838879   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.838892   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.839169   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.839591   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:31.839619   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:31.839756   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1011 20:59:31.840524   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.840899   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.840941   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.841109   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.841301   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.841359   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.841502   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.841604   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.841714   19546 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1011 20:59:31.841925   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.841941   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.842281   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.842448   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.842661   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1011 20:59:31.842674   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1011 20:59:31.842686   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.843978   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.844118   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I1011 20:59:31.844544   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.844899   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.844916   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.845250   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.845404   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.845513   19546 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1011 20:59:31.846148   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.846490   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.846511   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.846793   19546 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:59:31.846800   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.846807   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1011 20:59:31.846819   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.846796   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.846984   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.847147   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.847259   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.848940   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:31.849299   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.849684   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.849716   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.849829   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.850005   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.850147   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.850265   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.850971   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:31.851917   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1011 20:59:31.853051   19546 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:59:31.853067   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1011 20:59:31.853082   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.855625   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.855980   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.856009   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.856172   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.856352   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.856497   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.856634   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:31.857840   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1011 20:59:31.858221   19546 main.go:141] libmachine: () Calling .GetVersion
	W1011 20:59:31.858239   19546 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33242->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.858264   19546 retry.go:31] will retry after 195.444762ms: ssh: handshake failed: read tcp 192.168.39.1:33242->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.858607   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.858638   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.859019   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.859230   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.861010   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.862893   19546 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1011 20:59:31.864383   19546 out.go:177]   - Using image docker.io/busybox:stable
	I1011 20:59:31.865603   19546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:59:31.865624   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1011 20:59:31.865640   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.868493   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.868956   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.868974   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.869139   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.869310   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.869457   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.869576   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	W1011 20:59:31.870144   19546 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33248->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.870172   19546 retry.go:31] will retry after 193.429446ms: ssh: handshake failed: read tcp 192.168.39.1:33248->192.168.39.109:22: read: connection reset by peer
	I1011 20:59:31.870725   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1011 20:59:31.871018   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:31.871445   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:31.871466   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:31.871755   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:31.871937   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:31.873248   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:31.873987   19546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 20:59:31.874003   19546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 20:59:31.874019   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:31.876472   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.876810   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:31.876832   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:31.876972   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:31.877138   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:31.877278   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:31.877391   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:32.196403   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1011 20:59:32.196428   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1011 20:59:32.199602   19546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:59:32.199692   19546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 20:59:32.230810   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1011 20:59:32.230834   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1011 20:59:32.258327   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1011 20:59:32.258347   19546 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1011 20:59:32.277837   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1011 20:59:32.277855   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1011 20:59:32.287783   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1011 20:59:32.305893   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 20:59:32.305912   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1011 20:59:32.308569   19546 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1011 20:59:32.308587   19546 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1011 20:59:32.327745   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:59:32.333696   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:59:32.357074   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:59:32.364167   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 20:59:32.378495   19546 node_ready.go:35] waiting up to 6m0s for node "addons-335640" to be "Ready" ...
	I1011 20:59:32.387230   19546 node_ready.go:49] node "addons-335640" has status "Ready":"True"
	I1011 20:59:32.387249   19546 node_ready.go:38] duration metric: took 8.732384ms for node "addons-335640" to be "Ready" ...
	I1011 20:59:32.387259   19546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:32.390062   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:59:32.402642   19546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:32.412976   19546 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:59:32.412995   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1011 20:59:32.423735   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1011 20:59:32.423753   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1011 20:59:32.480896   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:59:32.510218   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1011 20:59:32.510236   19546 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1011 20:59:32.535556   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1011 20:59:32.535581   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1011 20:59:32.579994   19546 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:59:32.580020   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1011 20:59:32.671703   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 20:59:32.671723   19546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 20:59:32.673061   19546 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1011 20:59:32.673075   19546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1011 20:59:32.675649   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1011 20:59:32.675663   19546 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1011 20:59:32.693304   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:59:32.749313   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1011 20:59:32.749344   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1011 20:59:32.784844   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:59:32.842750   19546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:59:32.842777   19546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 20:59:32.859259   19546 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:32.859280   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1011 20:59:32.882999   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:59:32.924624   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1011 20:59:32.924643   19546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1011 20:59:32.959502   19546 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1011 20:59:32.959522   19546 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1011 20:59:33.048908   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:59:33.065364   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:33.178985   19546 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:33.179010   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1011 20:59:33.181022   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1011 20:59:33.181040   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1011 20:59:33.490032   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1011 20:59:33.490057   19546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1011 20:59:33.532631   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:33.726842   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1011 20:59:33.726863   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1011 20:59:34.043299   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1011 20:59:34.043318   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1011 20:59:34.369693   19546 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:34.369715   19546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1011 20:59:34.435832   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:34.736855   19546 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.537125987s)
	I1011 20:59:34.736881   19546 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1011 20:59:34.830333   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:34.932968   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.60519175s)
	I1011 20:59:34.933016   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933026   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933051   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.645237718s)
	I1011 20:59:34.933086   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933104   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933301   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933311   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933320   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933323   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933329   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933336   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933342   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:34.933349   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:34.933534   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933586   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:34.933598   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:34.933630   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:34.933641   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:34.933651   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:35.241587   19546 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335640" context rescaled to 1 replicas
	I1011 20:59:36.514579   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:36.629937   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.296211247s)
	I1011 20:59:36.629995   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.629997   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.272895818s)
	I1011 20:59:36.630014   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630031   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630038   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.265849132s)
	I1011 20:59:36.630049   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630057   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630070   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630116   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.240029822s)
	I1011 20:59:36.630148   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630159   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630378   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630391   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630425   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630430   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630434   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630445   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630430   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630449   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630456   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630468   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630477   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630477   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630485   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630410   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630492   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630499   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630508   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.630490   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630410   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630514   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.630860   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630884   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630883   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.630892   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630915   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630922   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.630930   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.630937   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.631958   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:36.631986   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.631993   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:36.753902   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:36.753925   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:36.754146   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:36.754162   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:38.832790   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1011 20:59:38.832835   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:38.836274   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:38.836748   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:38.836778   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:38.836984   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:38.837188   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:38.837357   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:38.837513   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:38.978581   19546 pod_ready.go:93] pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:38.978606   19546 pod_ready.go:82] duration metric: took 6.575937129s for pod "coredns-7c65d6cfc9-c8225" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:38.978628   19546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:39.156654   19546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1011 20:59:39.285104   19546 addons.go:234] Setting addon gcp-auth=true in "addons-335640"
	I1011 20:59:39.285156   19546 host.go:66] Checking if "addons-335640" exists ...
	I1011 20:59:39.285490   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:39.285519   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:39.300693   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I1011 20:59:39.301272   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:39.301793   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:39.301816   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:39.302143   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:39.302628   19546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 20:59:39.302656   19546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 20:59:39.316876   19546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I1011 20:59:39.317228   19546 main.go:141] libmachine: () Calling .GetVersion
	I1011 20:59:39.317653   19546 main.go:141] libmachine: Using API Version  1
	I1011 20:59:39.317676   19546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 20:59:39.317988   19546 main.go:141] libmachine: () Calling .GetMachineName
	I1011 20:59:39.318191   19546 main.go:141] libmachine: (addons-335640) Calling .GetState
	I1011 20:59:39.319625   19546 main.go:141] libmachine: (addons-335640) Calling .DriverName
	I1011 20:59:39.319818   19546 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1011 20:59:39.319838   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHHostname
	I1011 20:59:39.322281   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:39.322662   19546 main.go:141] libmachine: (addons-335640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:e5:d7", ip: ""} in network mk-addons-335640: {Iface:virbr1 ExpiryTime:2024-10-11 21:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:e5:d7 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:addons-335640 Clientid:01:52:54:00:8b:e5:d7}
	I1011 20:59:39.322687   19546 main.go:141] libmachine: (addons-335640) DBG | domain addons-335640 has defined IP address 192.168.39.109 and MAC address 52:54:00:8b:e5:d7 in network mk-addons-335640
	I1011 20:59:39.322795   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHPort
	I1011 20:59:39.322946   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHKeyPath
	I1011 20:59:39.323083   19546 main.go:141] libmachine: (addons-335640) Calling .GetSSHUsername
	I1011 20:59:39.323187   19546 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/addons-335640/id_rsa Username:docker}
	I1011 20:59:40.265296   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.571961251s)
	I1011 20:59:40.265348   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265349   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.784421586s)
	I1011 20:59:40.265372   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265381   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265399   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265382   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.480507112s)
	I1011 20:59:40.265440   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.382423621s)
	I1011 20:59:40.265472   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265482   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265451   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265514   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265518   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.216575985s)
	I1011 20:59:40.265537   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265548   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265667   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.200269042s)
	I1011 20:59:40.265688   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.265697   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.265843   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.733178263s)
	W1011 20:59:40.265869   19546 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:40.265899   19546 retry.go:31] will retry after 274.780509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
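
The apply failure above is a CRD-establishment race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, so the first pass fails with "ensure CRDs are installed first" and the scheduled retry succeeds once the CRD has been registered (the retried apply completes at 20:59:42 below). A minimal Go sketch of waiting for a CRD to report Established before applying dependent objects follows; it is not minikube's code, and the kubeconfig path and CRD name are illustrative assumptions taken from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path copied from the log above; adjust for your cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        crdName := "volumesnapshotclasses.snapshot.storage.k8s.io"
        // Poll until the CRD reports the Established condition; only then is it
        // safe to apply VolumeSnapshotClass objects without the race seen above.
        err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
            crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
                context.TODO(), crdName, metav1.GetOptions{})
            if err != nil {
                return false, nil // not created yet; keep polling
            }
            for _, cond := range crd.Status.Conditions {
                if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println(crdName, "is established")
    }
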
	I1011 20:59:40.265976   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.265991   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266004   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266028   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266034   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266041   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266047   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266095   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266100   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266107   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266115   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266165   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266172   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266180   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266185   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266413   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266434   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266455   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266458   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266465   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266466   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266474   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266482   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266474   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.266520   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.266711   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266727   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.266791   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.266798   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.266806   19546 addons.go:475] Verifying addon registry=true in "addons-335640"
	I1011 20:59:40.266996   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267004   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267699   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267710   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267718   19546 addons.go:475] Verifying addon metrics-server=true in "addons-335640"
	I1011 20:59:40.267802   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.267834   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.267836   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267849   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267854   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267856   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.267863   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267865   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.267871   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.267874   19546 addons.go:475] Verifying addon ingress=true in "addons-335640"
	I1011 20:59:40.267878   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.268151   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:40.268170   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.268680   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.268915   19546 out.go:177] * Verifying registry addon...
	I1011 20:59:40.269991   19546 out.go:177] * Verifying ingress addon...
	I1011 20:59:40.270823   19546 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335640 service yakd-dashboard -n yakd-dashboard
	
	I1011 20:59:40.272729   19546 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1011 20:59:40.273933   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1011 20:59:40.309895   19546 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:40.309918   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.310256   19546 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1011 20:59:40.310277   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
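
The repeated kapi.go:96 "waiting for pod" lines that follow come from polling pods by label selector until every match reports the Ready condition. A rough client-go sketch of that pattern, using the ingress-nginx selector and kubeconfig path from the log as examples (this is not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod has the Ready condition set to True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "app.kubernetes.io/name=ingress-nginx"
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // transient errors and empty lists are retried
            }
            for _, p := range pods.Items {
                if !podReady(&p) {
                    return false, nil
                }
            }
            return true, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("all pods matching", selector, "are Ready")
    }
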
	I1011 20:59:40.342406   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:40.342427   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:40.342701   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:40.342768   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:40.541814   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:40.785995   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:40.787614   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.063310   19546 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:41.280842   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.281430   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.591469   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.761086239s)
	I1011 20:59:41.591505   19546 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.271665694s)
	I1011 20:59:41.591521   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:41.591539   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:41.591809   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:41.591834   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:41.591840   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:41.591847   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:41.591856   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:41.592126   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:41.592197   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:41.592215   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:41.592224   19546 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-335640"
	I1011 20:59:41.593332   19546 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:41.594222   19546 out.go:177] * Verifying csi-hostpath-driver addon...
	I1011 20:59:41.595990   19546 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1011 20:59:41.596706   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1011 20:59:41.597298   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1011 20:59:41.597317   19546 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1011 20:59:41.627782   19546 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:41.627802   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.665781   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1011 20:59:41.665806   19546 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1011 20:59:41.739758   19546 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:41.739783   19546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1011 20:59:41.782477   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.782709   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:41.826006   19546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:42.101157   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.282768   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.283945   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.448155   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.906288354s)
	I1011 20:59:42.448208   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:42.448222   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:42.448490   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:42.448539   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:42.448564   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:42.448580   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:42.448588   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:42.448808   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:42.448833   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:42.448843   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:42.602424   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.779678   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:42.780288   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.123368   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.170907   19546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.344861361s)
	I1011 20:59:43.170953   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:43.170967   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:43.171242   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:43.171255   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:43.171269   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:43.171278   19546 main.go:141] libmachine: Making call to close driver server
	I1011 20:59:43.171311   19546 main.go:141] libmachine: (addons-335640) Calling .Close
	I1011 20:59:43.171543   19546 main.go:141] libmachine: Successfully made call to close driver server
	I1011 20:59:43.171562   19546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 20:59:43.171564   19546 main.go:141] libmachine: (addons-335640) DBG | Closing plugin on server side
	I1011 20:59:43.173567   19546 addons.go:475] Verifying addon gcp-auth=true in "addons-335640"
	I1011 20:59:43.175083   19546 out.go:177] * Verifying gcp-auth addon...
	I1011 20:59:43.177037   19546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1011 20:59:43.250440   19546 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 20:59:43.250462   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.349476   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.349614   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.484792   19546 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f7488" not found
	I1011 20:59:43.484816   19546 pod_ready.go:82] duration metric: took 4.506180283s for pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace to be "Ready" ...
	E1011 20:59:43.484827   19546 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-f7488" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f7488" not found
	I1011 20:59:43.484834   19546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.497842   19546 pod_ready.go:93] pod "etcd-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.497875   19546 pod_ready.go:82] duration metric: took 13.030373ms for pod "etcd-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.497888   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.506654   19546 pod_ready.go:93] pod "kube-apiserver-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.506679   19546 pod_ready.go:82] duration metric: took 8.78243ms for pod "kube-apiserver-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.506690   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.518732   19546 pod_ready.go:93] pod "kube-controller-manager-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.518754   19546 pod_ready.go:82] duration metric: took 12.056359ms for pod "kube-controller-manager-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.518766   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjszr" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.526272   19546 pod_ready.go:93] pod "kube-proxy-pjszr" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.526294   19546 pod_ready.go:82] duration metric: took 7.516668ms for pod "kube-proxy-pjszr" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.526306   19546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.601885   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.682392   19546 pod_ready.go:93] pod "kube-scheduler-addons-335640" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:43.682412   19546 pod_ready.go:82] duration metric: took 156.091647ms for pod "kube-scheduler-addons-335640" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:43.682419   19546 pod_ready.go:39] duration metric: took 11.295148573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:43.682434   19546 api_server.go:52] waiting for apiserver process to appear ...
	I1011 20:59:43.682492   19546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 20:59:43.703565   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:43.719787   19546 api_server.go:72] duration metric: took 12.030598076s to wait for apiserver process to appear ...
	I1011 20:59:43.719813   19546 api_server.go:88] waiting for apiserver healthz status ...
	I1011 20:59:43.719835   19546 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1011 20:59:43.724897   19546 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I1011 20:59:43.726292   19546 api_server.go:141] control plane version: v1.31.1
	I1011 20:59:43.726314   19546 api_server.go:131] duration metric: took 6.493799ms to wait for apiserver health ...
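
The healthz lines above amount to an HTTPS GET against the apiserver's /healthz endpoint that must return 200 with body "ok". A tiny sketch of such a probe, under the assumption that skipping TLS verification is acceptable for brevity (a real probe should trust the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // Endpoint taken from the log above.
        resp, err := client.Get("https://192.168.39.109:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
    }
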
	I1011 20:59:43.726322   19546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 20:59:43.778701   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.778701   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:43.889202   19546 system_pods.go:59] 18 kube-system pods found
	I1011 20:59:43.889229   19546 system_pods.go:61] "amd-gpu-device-plugin-9lfb2" [5e9f5699-a31f-43bd-9cc8-96ce96a3c580] Running
	I1011 20:59:43.889236   19546 system_pods.go:61] "coredns-7c65d6cfc9-c8225" [8bfebaba-1d36-43d9-81be-28300ec9e5f1] Running
	I1011 20:59:43.889242   19546 system_pods.go:61] "csi-hostpath-attacher-0" [935d8f6e-845b-4c20-b293-05d78c9d6470] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:43.889249   19546 system_pods.go:61] "csi-hostpath-resizer-0" [8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:43.889259   19546 system_pods.go:61] "csi-hostpathplugin-5bbrd" [15c420a2-bc23-4178-a7e7-424c14f1cdee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:43.889263   19546 system_pods.go:61] "etcd-addons-335640" [a5f2fe46-b853-4d7d-b18c-877e9328560c] Running
	I1011 20:59:43.889267   19546 system_pods.go:61] "kube-apiserver-addons-335640" [a1d31822-8e1b-4983-9b71-678270e37220] Running
	I1011 20:59:43.889271   19546 system_pods.go:61] "kube-controller-manager-addons-335640" [871e3fb0-541c-49fb-b7cc-b52516a8ccb2] Running
	I1011 20:59:43.889276   19546 system_pods.go:61] "kube-ingress-dns-minikube" [04cb67bc-78e9-4d22-8172-f3d24200627e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1011 20:59:43.889280   19546 system_pods.go:61] "kube-proxy-pjszr" [e3663ee2-aeb3-4c62-a737-e095cc1897aa] Running
	I1011 20:59:43.889284   19546 system_pods.go:61] "kube-scheduler-addons-335640" [d102a4da-3781-4045-b2a0-0984be417b76] Running
	I1011 20:59:43.889289   19546 system_pods.go:61] "metrics-server-84c5f94fbc-zmj4b" [8ec1bee3-86d5-4b1b-ba8e-96e9786005cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:43.889298   19546 system_pods.go:61] "nvidia-device-plugin-daemonset-4rwwd" [fdff7711-2b34-4674-b560-4769911e0b24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1011 20:59:43.889306   19546 system_pods.go:61] "registry-66c9cd494c-fscdh" [b7eae652-7687-4daf-bcb5-ba3501d88f5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:43.889312   19546 system_pods.go:61] "registry-proxy-9bpbj" [ce628b0d-73e1-4fa3-a071-c9091c1ae2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:43.889317   19546 system_pods.go:61] "snapshot-controller-56fcc65765-tx42p" [9011781b-8f93-423b-bd92-d3df096f9a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:43.889324   19546 system_pods.go:61] "snapshot-controller-56fcc65765-wtz96" [ecfec52c-a8f2-454d-8a60-688497d37e44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:43.889329   19546 system_pods.go:61] "storage-provisioner" [e3064aeb-986a-48a2-9387-5a63fa2360bb] Running
	I1011 20:59:43.889335   19546 system_pods.go:74] duration metric: took 163.008614ms to wait for pod list to return data ...
	I1011 20:59:43.889342   19546 default_sa.go:34] waiting for default service account to be created ...
	I1011 20:59:44.082350   19546 default_sa.go:45] found service account: "default"
	I1011 20:59:44.082376   19546 default_sa.go:55] duration metric: took 193.025411ms for default service account to be created ...
	I1011 20:59:44.082386   19546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 20:59:44.101504   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.180781   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.277560   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.279316   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:44.286356   19546 system_pods.go:86] 18 kube-system pods found
	I1011 20:59:44.286376   19546 system_pods.go:89] "amd-gpu-device-plugin-9lfb2" [5e9f5699-a31f-43bd-9cc8-96ce96a3c580] Running
	I1011 20:59:44.286381   19546 system_pods.go:89] "coredns-7c65d6cfc9-c8225" [8bfebaba-1d36-43d9-81be-28300ec9e5f1] Running
	I1011 20:59:44.286388   19546 system_pods.go:89] "csi-hostpath-attacher-0" [935d8f6e-845b-4c20-b293-05d78c9d6470] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:44.286394   19546 system_pods.go:89] "csi-hostpath-resizer-0" [8c4e1169-a2c9-4ef7-bd1d-0f34c0779f64] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:44.286402   19546 system_pods.go:89] "csi-hostpathplugin-5bbrd" [15c420a2-bc23-4178-a7e7-424c14f1cdee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:44.286409   19546 system_pods.go:89] "etcd-addons-335640" [a5f2fe46-b853-4d7d-b18c-877e9328560c] Running
	I1011 20:59:44.286414   19546 system_pods.go:89] "kube-apiserver-addons-335640" [a1d31822-8e1b-4983-9b71-678270e37220] Running
	I1011 20:59:44.286417   19546 system_pods.go:89] "kube-controller-manager-addons-335640" [871e3fb0-541c-49fb-b7cc-b52516a8ccb2] Running
	I1011 20:59:44.286427   19546 system_pods.go:89] "kube-ingress-dns-minikube" [04cb67bc-78e9-4d22-8172-f3d24200627e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1011 20:59:44.286431   19546 system_pods.go:89] "kube-proxy-pjszr" [e3663ee2-aeb3-4c62-a737-e095cc1897aa] Running
	I1011 20:59:44.286437   19546 system_pods.go:89] "kube-scheduler-addons-335640" [d102a4da-3781-4045-b2a0-0984be417b76] Running
	I1011 20:59:44.286442   19546 system_pods.go:89] "metrics-server-84c5f94fbc-zmj4b" [8ec1bee3-86d5-4b1b-ba8e-96e9786005cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:44.286449   19546 system_pods.go:89] "nvidia-device-plugin-daemonset-4rwwd" [fdff7711-2b34-4674-b560-4769911e0b24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1011 20:59:44.286454   19546 system_pods.go:89] "registry-66c9cd494c-fscdh" [b7eae652-7687-4daf-bcb5-ba3501d88f5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:44.286463   19546 system_pods.go:89] "registry-proxy-9bpbj" [ce628b0d-73e1-4fa3-a071-c9091c1ae2ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:44.286470   19546 system_pods.go:89] "snapshot-controller-56fcc65765-tx42p" [9011781b-8f93-423b-bd92-d3df096f9a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:44.286476   19546 system_pods.go:89] "snapshot-controller-56fcc65765-wtz96" [ecfec52c-a8f2-454d-8a60-688497d37e44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:44.286481   19546 system_pods.go:89] "storage-provisioner" [e3064aeb-986a-48a2-9387-5a63fa2360bb] Running
	I1011 20:59:44.286488   19546 system_pods.go:126] duration metric: took 204.096425ms to wait for k8s-apps to be running ...
	I1011 20:59:44.286496   19546 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 20:59:44.286535   19546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 20:59:44.302602   19546 system_svc.go:56] duration metric: took 16.102828ms WaitForService to wait for kubelet
	I1011 20:59:44.302630   19546 kubeadm.go:582] duration metric: took 12.613443676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:59:44.302649   19546 node_conditions.go:102] verifying NodePressure condition ...
	I1011 20:59:44.482586   19546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 20:59:44.482629   19546 node_conditions.go:123] node cpu capacity is 2
	I1011 20:59:44.482646   19546 node_conditions.go:105] duration metric: took 179.989874ms to run NodePressure ...
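
The node_conditions.go figures above (ephemeral-storage 17734596Ki, cpu 2) are read from the node's reported capacity. A small sketch of listing nodes and printing those capacity fields with client-go, assuming the same kubeconfig path (not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Copy the quantities to locals so their String() methods can be called.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
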
	I1011 20:59:44.482657   19546 start.go:241] waiting for startup goroutines ...
	I1011 20:59:44.602468   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.680649   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:44.777026   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.778223   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.102303   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.180492   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.278526   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.278848   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.601042   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.681299   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:45.777535   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:45.778416   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.101316   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.180469   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.278668   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.278692   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.601216   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.680857   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:46.778350   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:46.778796   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.101061   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.180142   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.278280   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:47.278404   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.600990   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.679821   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:47.777408   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.778196   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.323238   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.323343   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.323438   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.323495   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.602022   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.681614   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:48.779220   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:48.779346   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.100983   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.179899   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.279084   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.279588   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.601768   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.680781   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:49.777833   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:49.778405   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.101629   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.181276   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.279072   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.279203   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.600831   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.681140   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:50.778528   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:50.779250   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.101266   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.200850   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.302405   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.302591   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:51.600886   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.681458   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:51.777902   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.778603   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.101695   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.180298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.277415   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:52.280852   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.601565   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.680858   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:52.777815   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.778921   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.100913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.180999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.277282   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:53.278290   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.601023   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.680370   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:53.778935   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.778978   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.101275   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.180646   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.277835   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.278481   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.601313   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.680854   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:54.778097   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:54.778506   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.101360   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.180922   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.278290   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:55.278948   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.602076   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.680758   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:55.777246   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.778654   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.101967   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.318584   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.319561   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:56.320410   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.601022   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.680289   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:56.777706   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.778548   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.101131   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.180836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.277731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.278035   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.602112   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.680373   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:57.778295   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:57.778571   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.101811   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.181346   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.278293   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.278469   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.601366   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.680961   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:58.778586   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:58.779010   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.102082   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.230880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.279002   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.279352   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.604348   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.703011   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 20:59:59.778106   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:59.778145   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.102259   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.180423   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.277713   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.278153   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.601855   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.681062   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:00.777450   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:00.777727   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.102357   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.180322   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.278702   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.280800   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.602072   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.680796   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:01.777530   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:01.779283   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.102354   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.180563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.278869   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.279212   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.602870   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.681704   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:02.778284   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:02.778396   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.102957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.181010   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.279187   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.279367   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.602880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:03.681769   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:03.778408   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:03.779145   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.102863   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.181412   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.278836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:04.278972   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.602203   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:04.681414   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:04.778235   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.778785   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.101772   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.180966   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.278318   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.278571   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.602286   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:05.681731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:05.778557   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:05.779210   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.241356   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.241661   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.279024   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.279246   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.600945   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:06.680752   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:06.779105   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:06.779353   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.101900   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.180244   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.279695   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.279739   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.603285   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:07.680374   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:07.779212   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:07.780280   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.104401   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.181284   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.549622   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:08.550106   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.601472   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:08.681267   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:08.779965   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.780761   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.102810   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.182004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.278260   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.278949   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:09.601887   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:09.681292   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:09.780282   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.780426   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.102328   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.183069   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.284196   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.284968   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.602275   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:10.681159   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:10.777949   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:10.778114   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.101731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.181926   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.284165   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.284394   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.601352   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:11.680786   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:11.778470   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:11.778488   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.101929   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.206812   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.278896   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.279080   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.603298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:12.680466   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:12.779513   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:12.779890   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.102788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.180633   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.278051   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.278194   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.602645   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:13.702479   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:13.778950   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:13.779518   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.102577   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.180727   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.280096   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.286176   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.606858   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:14.681029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:14.777984   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:14.778431   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.102385   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.180465   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.666215   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:15.666988   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.769542   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:15.770598   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:15.870962   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.874558   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.105711   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.180938   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.278839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:16.279399   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.601906   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:16.681267   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:16.777529   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.778250   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.102572   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.181090   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.277851   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.278126   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:17.607506   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:17.681236   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:17.777810   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.778090   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.101913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.181295   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.278536   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.278853   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.602471   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:18.681058   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:18.779507   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:18.779662   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.102086   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.181480   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.279376   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.279519   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:19.601942   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:19.680808   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:19.779044   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:19.779359   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.102046   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.181063   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.279098   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:20.279207   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.602041   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:20.681986   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:20.779796   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:20.779894   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.101411   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.180963   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.278891   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.279160   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:21.637418   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:21.681069   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:21.778746   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:21.779126   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.102279   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.180649   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.278528   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.278850   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:22.601289   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:22.681029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:22.777590   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:22.778589   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.102198   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.182123   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.277603   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.278052   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:23.602581   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:23.680823   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:23.777967   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:23.778014   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.100884   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.181632   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.277763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.278004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:24.601167   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:24.680633   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:24.779080   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:24.779439   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.102093   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.180706   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.278009   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.279127   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:25.600933   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:25.680478   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:25.779536   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:25.779689   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.102221   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.181563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.277859   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.279104   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:26.601390   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:26.939567   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:26.939719   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:26.939888   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.101785   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.181158   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.278246   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:27.279624   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.601685   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:27.680817   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:27.777683   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:27.778574   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.102337   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.180519   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.280021   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.280344   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:28.601489   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:28.680750   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:28.778301   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 21:00:28.778651   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.101771   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.180935   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.277836   19546 kapi.go:107] duration metric: took 49.00389833s to wait for kubernetes.io/minikube-addons=registry ...
	I1011 21:00:29.278069   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:29.601770   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:29.681011   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:29.778457   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.102338   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.179988   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.278857   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:30.602716   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:30.680968   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:30.781397   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.102325   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.182038   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.278187   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:31.601203   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:31.680393   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:31.777601   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.101237   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.181237   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.277926   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:32.601958   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:32.679826   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:32.777279   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.102510   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.180637   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.277580   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:33.601103   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:33.680583   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.778661   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.101835   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.180767   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.277058   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:34.602168   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:34.680763   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.777242   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.108549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.186928   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.277602   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:35.601674   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:35.681855   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.778295   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.101953   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.179908   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.277472   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:36.601038   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:36.680412   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.777479   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.102180   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.183660   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.278041   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:37.604315   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:37.680818   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:37.777763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.107029   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.296155   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.296756   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:38.620957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:38.709691   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:38.777660   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.103640   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.180441   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.277775   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:39.601665   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:39.681143   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:39.790189   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:40.112788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.181407   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.278834   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:40.602247   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:40.680496   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:40.779851   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:41.102691   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.181181   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.278109   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:41.602250   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:41.680549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:41.778251   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:42.104023   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.205549   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.305335   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:42.602290   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:42.680980   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:42.777679   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:43.102345   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.181129   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.279731   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:43.602542   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:43.680160   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:43.777834   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:44.102075   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.202231   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.277736   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:44.601913   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:44.680089   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:44.779807   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:45.102517   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.181144   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.277753   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:45.607839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:45.682298   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:45.784365   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:46.102431   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.186415   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.278401   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:46.601951   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:46.680766   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:46.777869   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:47.101304   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.180618   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.278879   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:47.602338   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:47.680269   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:47.780804   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:48.102150   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.201526   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.304379   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:48.602791   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:48.681712   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:48.777454   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:49.101148   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.180865   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.277688   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:49.602182   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:49.680806   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:49.777753   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:50.102305   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.179878   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.278477   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:50.601113   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:50.680458   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:50.777924   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:51.102696   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.180998   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.277301   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:51.601832   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:51.680830   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:51.778040   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:52.405620   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.406580   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:52.406848   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:52.601793   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:52.680901   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:52.778061   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:53.101539   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.180590   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:53.277189   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:53.600896   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:53.679950   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:53.777613   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:54.101586   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:54.180332   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:54.278181   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:54.603334   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:54.680926   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:54.778008   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:55.109583   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:55.208417   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:55.278126   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:55.601331   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:55.680880   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:55.777763   19546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:56.101886   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:56.202116   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:56.305609   19546 kapi.go:107] duration metric: took 1m16.032872891s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1011 21:00:56.601786   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:56.681914   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:57.102006   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:57.201713   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:57.601778   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:57.682313   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:58.102130   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:58.183764   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:58.601716   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:58.701625   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:59.103815   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:59.202903   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:59.600776   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:59.681705   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:00.101731   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:00.181410   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:00.601818   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:00.681311   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:01.101917   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:01.181449   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:01.603801   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:01.683494   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:02.101681   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:02.181376   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:02.602661   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:02.681056   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:03.101879   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:01:03.181033   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:03.602332   19546 kapi.go:107] duration metric: took 1m22.005619714s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1011 21:01:03.679976   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:04.180635   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:04.681299   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:05.181569   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:05.680409   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:06.181200   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:06.680948   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:07.181127   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:07.680339   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:08.181258   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:08.680825   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:09.183465   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:09.680543   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:10.181680   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:10.680863   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:11.181462   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:11.681747   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:12.181535   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:12.681613   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:13.181334   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:13.681397   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:14.180902   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:14.681836   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:15.181405   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:15.681032   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:16.180958   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:16.681574   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:17.181417   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:17.681553   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:18.181725   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:18.681386   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:19.180756   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:19.681496   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:20.181934   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:20.681209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:21.181559   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:21.681309   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:22.181052   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:22.680474   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:23.181273   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:23.680992   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:24.180560   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:24.680783   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:25.181133   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:25.681147   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:26.180930   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:26.680630   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:27.181283   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:27.681073   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:28.181121   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:28.680584   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:29.182421   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:29.681279   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:30.181999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:30.681719   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:31.181004   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:31.680915   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:32.180724   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:32.681941   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:33.181216   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:33.683315   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:34.181671   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:34.682225   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:35.182292   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:35.681933   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:36.181627   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:36.681210   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:37.180588   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:37.681961   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:38.181545   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:38.681209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:39.181035   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:39.681389   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:40.182694   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:40.680637   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:41.181450   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:41.681020   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:42.180481   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:42.681377   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:43.181015   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:43.681413   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:44.181094   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:44.680957   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:45.181007   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:45.681065   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:46.180839   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:46.680051   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:47.180853   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:47.680314   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:48.181385   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:48.681076   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:49.181120   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:49.680937   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:50.181052   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:50.680808   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:51.181202   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:51.681473   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:52.182374   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:52.681078   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:53.181046   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:53.681157   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:54.180998   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:54.680495   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:55.181234   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:55.682036   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:56.181563   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:56.681035   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:57.181712   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:57.681192   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:58.181240   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:58.680409   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:59.180912   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:01:59.681325   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:00.181341   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:00.681641   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:01.181161   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:01.680999   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:02.180249   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:02.680631   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:03.181370   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:03.681001   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:04.180788   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:04.682209   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:05.181190   19546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:02:05.682287   19546 kapi.go:107] duration metric: took 2m22.50524239s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1011 21:02:05.683755   19546 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-335640 cluster.
	I1011 21:02:05.685316   19546 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1011 21:02:05.686666   19546 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1011 21:02:05.688138   19546 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1011 21:02:05.689465   19546 addons.go:510] duration metric: took 2m34.000257135s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin default-storageclass metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1011 21:02:05.689505   19546 start.go:246] waiting for cluster config update ...
	I1011 21:02:05.689525   19546 start.go:255] writing updated cluster config ...
	I1011 21:02:05.689856   19546 ssh_runner.go:195] Run: rm -f paused
	I1011 21:02:05.744262   19546 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:02:05.746153   19546 out.go:177] * Done! kubectl is now configured to use "addons-335640" cluster and "default" namespace by default
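	(Editorial aside, not part of the captured log.) The long run of kapi.go:96 "waiting for pod" lines above is a simple poll on a label selector until the matching pods leave Pending. A minimal sketch of that pattern with client-go is below; it is an illustration only, not minikube's kapi implementation. The selector "kubernetes.io/minikube-addons=gcp-auth" comes from the log, while the "gcp-auth" namespace, the 2-second interval, and the 6-minute timeout are assumptions chosen for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until every one is Running,
	// mirroring the repeated "waiting for pod ... current state: Pending" loop above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending (or otherwise not Running)
				}
			}
			return true, nil
		})
	}

	func main() {
		// Load the kubeconfig written by "minikube start" (default ~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Namespace "gcp-auth" is an assumption for this sketch; the selector is taken from the log.
		if err := waitForPods(context.Background(), cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pods ready")
	}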
	
	
	==> CRI-O <==
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.226826033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680928226796218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7f05671-4c63-4393-a3c2-ff430de830df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.227718606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ac3857f-471d-41c4-b548-31bc9ee5a5b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.227825049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ac3857f-471d-41c4-b548-31bc9ee5a5b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.228234029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f5526765bdde4e168324cf697a90e6119b9c8653aa62800c2e70ec4ed562f6b,PodSandboxId:ef4cc3bb18ef719acfcd5a18d4513ad5f067166752002c5d28da70306bd980c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728680731879706419,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h7nfv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aca5a6ab-8faf-455e-874f-b4f4f33445f1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-b
edc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde0304b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c63478d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:C
ONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ac3857f-471d-41c4-b548-31bc9ee5a5b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.273892513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce71c4da-2cd2-4581-9882-90407765557f name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.274024026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce71c4da-2cd2-4581-9882-90407765557f name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.276601158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24a4fc13-7998-4842-a318-c2f07dee68ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.279016255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680928278975125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24a4fc13-7998-4842-a318-c2f07dee68ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.279659308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56706ac1-6d82-4431-bf47-2d66a671ce75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.279768543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56706ac1-6d82-4431-bf47-2d66a671ce75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.280331244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f5526765bdde4e168324cf697a90e6119b9c8653aa62800c2e70ec4ed562f6b,PodSandboxId:ef4cc3bb18ef719acfcd5a18d4513ad5f067166752002c5d28da70306bd980c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728680731879706419,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h7nfv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aca5a6ab-8faf-455e-874f-b4f4f33445f1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-b
edc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde0304b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c63478d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:C
ONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56706ac1-6d82-4431-bf47-2d66a671ce75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.335501058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f55a459b-b876-4518-a147-0ceb7bce5c73 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.335605950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f55a459b-b876-4518-a147-0ceb7bce5c73 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.338509996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9715665d-f283-47bd-b1d9-db623fe3251a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.340668933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680928340630671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9715665d-f283-47bd-b1d9-db623fe3251a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.341401044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f74c0466-768a-4efb-b46b-4e62e4c3749b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.341508902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f74c0466-768a-4efb-b46b-4e62e4c3749b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.341964741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f5526765bdde4e168324cf697a90e6119b9c8653aa62800c2e70ec4ed562f6b,PodSandboxId:ef4cc3bb18ef719acfcd5a18d4513ad5f067166752002c5d28da70306bd980c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728680731879706419,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h7nfv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aca5a6ab-8faf-455e-874f-b4f4f33445f1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-b
edc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde0304b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c63478d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:C
ONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f74c0466-768a-4efb-b46b-4e62e4c3749b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.386395611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97b58c28-9b11-41e3-9bba-f59453c0f437 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.386541313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97b58c28-9b11-41e3-9bba-f59453c0f437 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.389367906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ace01611-57e8-4d16-9ec5-f7b74a52bb0e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.391044534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680928391000151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ace01611-57e8-4d16-9ec5-f7b74a52bb0e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.391757275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bf127b6-6a16-4492-b5aa-4080bd0b062d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.391810747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bf127b6-6a16-4492-b5aa-4080bd0b062d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:08:48 addons-335640 crio[665]: time="2024-10-11 21:08:48.392085575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f5526765bdde4e168324cf697a90e6119b9c8653aa62800c2e70ec4ed562f6b,PodSandboxId:ef4cc3bb18ef719acfcd5a18d4513ad5f067166752002c5d28da70306bd980c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728680731879706419,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h7nfv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aca5a6ab-8faf-455e-874f-b4f4f33445f1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc36dea0fbfd1a03331832bd7dc9683fa3392552ebbe882154cdd3bcdcec649c,PodSandboxId:fb931a82e05b9829f6d84a3245f4aa0ba50faf08cab3549072f87e293201e0de,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728680591253020649,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3af527ef-278e-441a-a261-0483d6809c9a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209691f7026aa4262a9bc7c3e93a598d900410a7318f89d74efba0c1b9f4e8fe,PodSandboxId:a82d83d3733b2e7c5a9a69331001d69488c9279119ddb802823362174b13b552,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728680530540915545,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3caf89f2-1c8a-48d3-b
edc-9796d7b20ff7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af1353666e71fc9569fe33347564a41cc8ed004779d6a9e74c6e9d86aec451,PodSandboxId:99742615d42557cb1b89564d7479c51dc107170174a0ae2e87fd4bea34d9f8e4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728680415801080031,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zmj4b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8ec1bee3-86d5-4b1b-ba8e-96e9786005cc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b8178d0fe502790543771b505ccbfca159d587abca16871dcde78b8a66e139,PodSandboxId:4ee7875e4d7a1693fd523f9106877d5ed6263908819fba836cd7bde0304b99ec,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728680382708653534,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9lfb2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9f5699-a31f-43bd-9cc8-96ce96a3c580,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3,PodSandboxId:d70591db6b44a6f62770b621c476bd4533fd39e586f6abbd0ef0ada1b90c891d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728680378440976180,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3064aeb-986a-48a2-9387-5a63fa2360bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a,PodSandboxId:ac050a37bd2a4f4cf1ad18a4c63478d98ff60d517e7301e067968c8111fa23d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728680375824733989,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-c8225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bfebaba-1d36-43d9-81be-28300ec9e5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345,PodSandboxId:a7702246bf4fd74f83c96aa582346e1fcc49772ca9b12add91470904f2ac897d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728680373367637158,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjszr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3663ee2-aeb3-4c62-a737-e095cc1897aa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a,PodSandboxId:c3d3376cc1c0d34d8c7a17f1c32e2c782501c34c6463bbf06cf145cb3432f4e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728680361748753724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3bbd1a87e260b26018493eafa545f11,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09,PodSandboxId:c262f51d7b83e1c584d215e8ca17eea777ce3ca60baf26b04c1a495709404c17,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728680361685952191,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9858946bfe7fcb9bb7388f72135b4b67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed,PodSandboxId:04f67c71acadc83243909a3d3fa1555a03b79821ef2cc317885d26c95d33f15e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:C
ONTAINER_RUNNING,CreatedAt:1728680361696032105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38371230838b874d9394ce3526f4b9ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64,PodSandboxId:3d278f46c6072eef2e59e27e8c1fbe8184f3c30b173cf0fdecb48947c95bf516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728680361629986941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96ee826ead82f064802f4fe0719de0ad,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bf127b6-6a16-4492-b5aa-4080bd0b062d name=/runtime.v1.RuntimeService/ListContainers
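
The crio debug entries above are repeated /runtime.v1.RuntimeService/ListContainers calls answered over the node's CRI socket. Below is a minimal Go sketch of the same listing, assuming direct access to the CRI-O socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation in the node description further down); it is an illustration only, not part of the minikube test harness.

// list_containers.go: sketch of a CRI ListContainers call against CRI-O.
// Assumes access to the node's CRI socket (unix:///var/run/crio/crio.sock, taken
// from the "describe nodes" annotation below); not part of the test harness.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI is plain gRPC over a unix socket; dial the CRI-O runtime endpoint.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC the crio log records: ListContainers with an empty filter
	// ("No filters were applied, returning full container list").
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex chars; print the short form plus name and state.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

Run on the node (e.g. via minikube ssh), this would produce roughly the same rows as the "container status" table that follows.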
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f5526765bdde       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   ef4cc3bb18ef7       hello-world-app-55bf9c44b4-h7nfv
	bc36dea0fbfd1       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   fb931a82e05b9       nginx
	209691f7026aa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   a82d83d3733b2       busybox
	77af1353666e7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   8 minutes ago       Running             metrics-server            0                   99742615d4255       metrics-server-84c5f94fbc-zmj4b
	f2b8178d0fe50       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                9 minutes ago       Running             amd-gpu-device-plugin     0                   4ee7875e4d7a1       amd-gpu-device-plugin-9lfb2
	f16d688ebd563       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        9 minutes ago       Running             storage-provisioner       0                   d70591db6b44a       storage-provisioner
	868280db0f1ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        9 minutes ago       Running             coredns                   0                   ac050a37bd2a4       coredns-7c65d6cfc9-c8225
	06f0c4117abe8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        9 minutes ago       Running             kube-proxy                0                   a7702246bf4fd       kube-proxy-pjszr
	30fc88697faa0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        9 minutes ago       Running             kube-scheduler            0                   c3d3376cc1c0d       kube-scheduler-addons-335640
	42d6d5988caa0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        9 minutes ago       Running             kube-apiserver            0                   04f67c71acadc       kube-apiserver-addons-335640
	6954d994f9340       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   c262f51d7b83e       etcd-addons-335640
	3baf00379ca6e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        9 minutes ago       Running             kube-controller-manager   0                   3d278f46c6072       kube-controller-manager-addons-335640
	
	
	==> coredns [868280db0f1ec92bdfcd9f1d47a78ebce2f3b5332c55acefc33e5555a3a57a2a] <==
	[INFO] 10.244.0.22:33840 - 39176 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094014s
	[INFO] 10.244.0.22:52656 - 56046 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035057s
	[INFO] 10.244.0.22:33840 - 57298 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007678s
	[INFO] 10.244.0.22:52656 - 22515 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036524s
	[INFO] 10.244.0.22:33840 - 59419 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075223s
	[INFO] 10.244.0.22:33840 - 57185 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000108046s
	[INFO] 10.244.0.22:52656 - 43740 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036464s
	[INFO] 10.244.0.22:33840 - 49360 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101495s
	[INFO] 10.244.0.22:52656 - 45876 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040414s
	[INFO] 10.244.0.22:52656 - 40402 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041182s
	[INFO] 10.244.0.22:52656 - 25291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039994s
	[INFO] 10.244.0.22:44020 - 20958 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114763s
	[INFO] 10.244.0.22:51838 - 17551 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004894s
	[INFO] 10.244.0.22:51838 - 33550 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034822s
	[INFO] 10.244.0.22:51838 - 23546 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028979s
	[INFO] 10.244.0.22:51838 - 61509 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065815s
	[INFO] 10.244.0.22:44020 - 48307 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000055714s
	[INFO] 10.244.0.22:51838 - 6207 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003437s
	[INFO] 10.244.0.22:51838 - 55360 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003808s
	[INFO] 10.244.0.22:44020 - 3611 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030568s
	[INFO] 10.244.0.22:44020 - 1856 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026974s
	[INFO] 10.244.0.22:44020 - 32601 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027378s
	[INFO] 10.244.0.22:51838 - 28920 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000183399s
	[INFO] 10.244.0.22:44020 - 51423 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000407419s
	[INFO] 10.244.0.22:44020 - 36996 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069465s
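
The NXDOMAIN/NOERROR pattern above is ordinary resolver search-path expansion: hello-world-app.default.svc.cluster.local has fewer dots than the usual ndots:5 threshold, so the client appends each search domain (here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before trying the name as-is, and only the unsuffixed query resolves. The following Go sketch reproduces that candidate list; the search list and ndots value are assumptions inferred from the logged query names, not read from the querying pod's resolv.conf.

// search_expansion.go: illustrative sketch of DNS search-path expansion as seen
// in the coredns log above. Search list and ndots are inferred assumptions.
package main

import (
	"fmt"
	"strings"
)

// candidates returns the query names a resolver with the given search list and
// ndots threshold would try, in order, for a relative name.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name) // enough dots: try the name as-is first
	}
	for _, s := range search {
		out = append(out, name+"."+s) // suffixed attempts (the NXDOMAINs above)
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name) // fall back to the bare name last (the NOERROR)
	}
	return out
}

func main() {
	search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}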
	
	
	==> describe nodes <==
	Name:               addons-335640
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=addons-335640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T20_59_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335640
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 20:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335640
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:08:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:06:04 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:06:04 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:06:04 +0000   Fri, 11 Oct 2024 20:59:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:06:04 +0000   Fri, 11 Oct 2024 20:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    addons-335640
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2e1823e29c4a7d9067bd8673ac97f7
	  System UUID:                8a2e1823-e29c-4a7d-9067-bd8673ac97f7
	  Boot ID:                    6a7e73ca-006d-4953-9110-3bc1a1eac562
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  default                     hello-world-app-55bf9c44b4-h7nfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 amd-gpu-device-plugin-9lfb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 coredns-7c65d6cfc9-c8225                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m16s
	  kube-system                 etcd-addons-335640                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-addons-335640             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-addons-335640    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-pjszr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-scheduler-addons-335640             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-84c5f94fbc-zmj4b          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m14s  kube-proxy       
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s  kubelet          Node addons-335640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s  kubelet          Node addons-335640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s  kubelet          Node addons-335640 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m21s  kubelet          Node addons-335640 status is now: NodeReady
	  Normal  RegisteredNode           9m18s  node-controller  Node addons-335640 event: Registered Node addons-335640 in Controller
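
The percentages in the "Allocated resources" table above are the summed pod requests and limits divided by the node's allocatable capacity: 850m CPU of 2000m is 42%, and 370Mi of the 3912788Ki (about 3821Mi) allocatable memory is 9%. A small sketch of that arithmetic, with values copied from the table; truncating to whole percent matches the displayed figures but is an assumption here.

// allocated_percent.go: reproduces the request/limit percentages shown in the
// "Allocated resources" table above from the node's allocatable capacity.
package main

import "fmt"

func main() {
	const (
		allocCPUMilli = 2000    // 2 CPUs allocatable
		allocMemKi    = 3912788 // allocatable memory in Ki
		reqCPUMilli   = 850
		reqMemMi      = 370
		limMemMi      = 170
	)
	allocMemMi := float64(allocMemKi) / 1024.0
	fmt.Printf("cpu requests:    %d%%\n", int(float64(reqCPUMilli)/allocCPUMilli*100)) // 42%
	fmt.Printf("memory requests: %d%%\n", int(float64(reqMemMi)/allocMemMi*100))       // 9%
	fmt.Printf("memory limits:   %d%%\n", int(float64(limMemMi)/allocMemMi*100))       // 4%
}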
	
	
	==> dmesg <==
	[  +5.311309] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.178382] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.009019] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.039036] kauditd_printk_skb: 138 callbacks suppressed
	[  +9.120730] kauditd_printk_skb: 77 callbacks suppressed
	[Oct11 21:00] kauditd_printk_skb: 2 callbacks suppressed
	[ +23.841651] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.010961] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.213329] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.111618] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.230247] kauditd_printk_skb: 16 callbacks suppressed
	[Oct11 21:02] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.010586] kauditd_printk_skb: 9 callbacks suppressed
	[ +16.312988] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.262184] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.323065] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.606848] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.550661] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.036065] kauditd_printk_skb: 32 callbacks suppressed
	[Oct11 21:03] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.016046] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.124508] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.066527] kauditd_printk_skb: 38 callbacks suppressed
	[Oct11 21:05] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.041219] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [6954d994f9340662a1c2f97824c3586c5470505aca1890430e8de750f3a24f09] <==
	{"level":"warn","ts":"2024-10-11T21:00:26.924182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.198814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925497Z","caller":"traceutil/trace.go:171","msg":"trace[1286683626] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"159.582372ms","start":"2024-10-11T21:00:26.765904Z","end":"2024-10-11T21:00:26.925486Z","steps":["trace[1286683626] 'agreement among raft nodes before linearized reading'  (duration: 158.182564ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:26.924238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.248308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925636Z","caller":"traceutil/trace.go:171","msg":"trace[761550027] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"254.646251ms","start":"2024-10-11T21:00:26.670983Z","end":"2024-10-11T21:00:26.925630Z","steps":["trace[761550027] 'agreement among raft nodes before linearized reading'  (duration: 253.239263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:26.924259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.545454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:26.925747Z","caller":"traceutil/trace.go:171","msg":"trace[1910771772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:944; }","duration":"160.033279ms","start":"2024-10-11T21:00:26.765709Z","end":"2024-10-11T21:00:26.925742Z","steps":["trace[1910771772] 'agreement among raft nodes before linearized reading'  (duration: 158.540684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:38.282793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.145782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:38.283025Z","caller":"traceutil/trace.go:171","msg":"trace[819982128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:972; }","duration":"115.329195ms","start":"2024-10-11T21:00:38.167625Z","end":"2024-10-11T21:00:38.282954Z","steps":["trace[819982128] 'range keys from in-memory index tree'  (duration: 115.060979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T21:00:52.386712Z","caller":"traceutil/trace.go:171","msg":"trace[1290041847] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1097; }","duration":"385.180198ms","start":"2024-10-11T21:00:52.001496Z","end":"2024-10-11T21:00:52.386676Z","steps":["trace[1290041847] 'read index received'  (duration: 384.949736ms)","trace[1290041847] 'applied index is now lower than readState.Index'  (duration: 229.812µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-11T21:00:52.386887Z","caller":"traceutil/trace.go:171","msg":"trace[1758957571] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"396.382816ms","start":"2024-10-11T21:00:51.990490Z","end":"2024-10-11T21:00:52.386873Z","steps":["trace[1758957571] 'process raft request'  (duration: 396.011348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.387027Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:00:51.990465Z","time spent":"396.446406ms","remote":"127.0.0.1:59848","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3132,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:825 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >"}
	{"level":"warn","ts":"2024-10-11T21:00:52.387219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.720059ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.387259Z","caller":"traceutil/trace.go:171","msg":"trace[1285946992] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1066; }","duration":"385.761737ms","start":"2024-10-11T21:00:52.001491Z","end":"2024-10-11T21:00:52.387253Z","steps":["trace[1285946992] 'agreement among raft nodes before linearized reading'  (duration: 385.685639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.388274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.852708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.388320Z","caller":"traceutil/trace.go:171","msg":"trace[387610853] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"301.033631ms","start":"2024-10-11T21:00:52.087278Z","end":"2024-10-11T21:00:52.388312Z","steps":["trace[387610853] 'agreement among raft nodes before linearized reading'  (duration: 300.823131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.388774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.909212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.388967Z","caller":"traceutil/trace.go:171","msg":"trace[745617295] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"125.103791ms","start":"2024-10-11T21:00:52.263854Z","end":"2024-10-11T21:00:52.388958Z","steps":["trace[745617295] 'agreement among raft nodes before linearized reading'  (duration: 124.893165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.389698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:00:52.087244Z","time spent":"301.095278ms","remote":"127.0.0.1:59802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-11T21:00:52.390599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.002237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T21:00:52.390644Z","caller":"traceutil/trace.go:171","msg":"trace[1788118320] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"223.107299ms","start":"2024-10-11T21:00:52.167525Z","end":"2024-10-11T21:00:52.390632Z","steps":["trace[1788118320] 'agreement among raft nodes before linearized reading'  (duration: 222.982967ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:00:52.390740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.702594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-11T21:00:52.390771Z","caller":"traceutil/trace.go:171","msg":"trace[1351951569] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1066; }","duration":"297.736282ms","start":"2024-10-11T21:00:52.093030Z","end":"2024-10-11T21:00:52.390766Z","steps":["trace[1351951569] 'agreement among raft nodes before linearized reading'  (duration: 297.687665ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T21:02:31.966992Z","caller":"traceutil/trace.go:171","msg":"trace[1706936483] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"400.269442ms","start":"2024-10-11T21:02:31.566693Z","end":"2024-10-11T21:02:31.966963Z","steps":["trace[1706936483] 'process raft request'  (duration: 399.905237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T21:02:31.967299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-11T21:02:31.566679Z","time spent":"400.386226ms","remote":"127.0.0.1:59788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1385 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-11T21:02:50.230077Z","caller":"traceutil/trace.go:171","msg":"trace[198283386] transaction","detail":"{read_only:false; response_revision:1547; number_of_response:1; }","duration":"103.967802ms","start":"2024-10-11T21:02:50.126082Z","end":"2024-10-11T21:02:50.230050Z","steps":["trace[198283386] 'process raft request'  (duration: 103.53851ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:08:48 up 9 min,  0 users,  load average: 0.16, 0.54, 0.45
	Linux addons-335640 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42d6d5988caa01eff6097bda1724bdc0c935ca5d2cb982d4b31e9c795a8ba6ed] <==
	I1011 21:01:18.057398       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1011 21:02:17.505464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.109:8443->192.168.39.1:36014: use of closed network connection
	E1011 21:02:17.690100       1 conn.go:339] Error on socket receive: read tcp 192.168.39.109:8443->192.168.39.1:36044: use of closed network connection
	I1011 21:02:26.875269       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.37.133"}
	I1011 21:02:58.399618       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1011 21:03:02.488746       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1011 21:03:02.696293       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1011 21:03:03.733947       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1011 21:03:08.159090       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1011 21:03:08.345354       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.63.223"}
	I1011 21:03:21.424208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.424514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.441085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.441877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.473315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.473374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.507913       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.507942       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1011 21:03:21.574277       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1011 21:03:21.574827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1011 21:03:22.507933       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1011 21:03:22.574717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1011 21:03:22.608383       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1011 21:05:28.742645       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.233.141"}
	E1011 21:05:32.263222       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [3baf00379ca6ea029d36f772b21c41a194271522437c812390a73a49badabd64] <==
	E1011 21:06:17.784985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:30.072255       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:30.072433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:49.051800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:49.051891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:06:56.493855       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:06:56.493997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:05.379953       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:05.380054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:19.518031       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:19.518244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:34.865796       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:34.865864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:42.392389       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:42.392474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:07:43.617613       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:07:43.617715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:08:17.990391       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:08:17.990639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:08:31.420073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:08:31.420288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:08:34.207038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:08:34.207272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1011 21:08:37.067755       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1011 21:08:37.067821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [06f0c4117abe8b04a61afe38b35c539552805dab8442a9881c3753cc0eb44345] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 20:59:34.225544       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 20:59:34.248046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E1011 20:59:34.248108       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 20:59:34.373535       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 20:59:34.373571       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 20:59:34.373592       1 server_linux.go:169] "Using iptables Proxier"
	I1011 20:59:34.385354       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 20:59:34.385687       1 server.go:483] "Version info" version="v1.31.1"
	I1011 20:59:34.385699       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 20:59:34.386894       1 config.go:199] "Starting service config controller"
	I1011 20:59:34.386909       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 20:59:34.386954       1 config.go:105] "Starting endpoint slice config controller"
	I1011 20:59:34.386960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 20:59:34.392669       1 config.go:328] "Starting node config controller"
	I1011 20:59:34.392678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 20:59:34.487297       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 20:59:34.487334       1 shared_informer.go:320] Caches are synced for service config
	I1011 20:59:34.497208       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [30fc88697faa06c3e9f1c9a92748a78492278f83f3e6d5cce977fad8e86d3f0a] <==
	W1011 20:59:24.275497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:59:24.275623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:24.275858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:59:24.275948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.107830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 20:59:25.108285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.141780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 20:59:25.141829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.142866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 20:59:25.142991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.223560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 20:59:25.223647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.340311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:59:25.340374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.393256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 20:59:25.393338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.393999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 20:59:25.394035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.429653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 20:59:25.429709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.482522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:59:25.482949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:59:25.545799       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 20:59:25.545857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 20:59:28.765197       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:07:26 addons-335640 kubelet[1205]: E1011 21:07:26.957046    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680846956535459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:26 addons-335640 kubelet[1205]: E1011 21:07:26.957202    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680846956535459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:36 addons-335640 kubelet[1205]: E1011 21:07:36.960455    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680856959836170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:36 addons-335640 kubelet[1205]: E1011 21:07:36.960751    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680856959836170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:46 addons-335640 kubelet[1205]: E1011 21:07:46.963722    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680866963312936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:46 addons-335640 kubelet[1205]: E1011 21:07:46.964016    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680866963312936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:56 addons-335640 kubelet[1205]: E1011 21:07:56.967443    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680876967003634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:07:56 addons-335640 kubelet[1205]: E1011 21:07:56.967783    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680876967003634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:06 addons-335640 kubelet[1205]: E1011 21:08:06.971052    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680886970644011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:06 addons-335640 kubelet[1205]: E1011 21:08:06.971082    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680886970644011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:16 addons-335640 kubelet[1205]: E1011 21:08:16.973441    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680896973025764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:16 addons-335640 kubelet[1205]: E1011 21:08:16.973501    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680896973025764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:20 addons-335640 kubelet[1205]: I1011 21:08:20.678741    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9lfb2" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:08:26 addons-335640 kubelet[1205]: E1011 21:08:26.698252    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 21:08:26 addons-335640 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 21:08:26 addons-335640 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 21:08:26 addons-335640 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:08:26 addons-335640 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:08:26 addons-335640 kubelet[1205]: E1011 21:08:26.976061    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680906975666689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:26 addons-335640 kubelet[1205]: E1011 21:08:26.976086    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680906975666689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:34 addons-335640 kubelet[1205]: I1011 21:08:34.678492    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:08:36 addons-335640 kubelet[1205]: E1011 21:08:36.980320    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680916979632168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:36 addons-335640 kubelet[1205]: E1011 21:08:36.980591    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680916979632168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:46 addons-335640 kubelet[1205]: E1011 21:08:46.984037    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680926983354477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:08:46 addons-335640 kubelet[1205]: E1011 21:08:46.984124    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728680926983354477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f16d688ebd563c6a60e182f133298e81c1c13356383b97b40b0ad9b06caeb9a3] <==
	I1011 20:59:38.953432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 20:59:38.969095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 20:59:38.974933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 20:59:39.214845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 20:59:39.215082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa!
	I1011 20:59:39.216847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26402892-5c04-4086-86d6-b40d74399051", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa became leader
	I1011 20:59:39.633064       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-335640_06a0458b-e9a7-427f-8eb5-60771a3be0aa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335640 -n addons-335640
helpers_test.go:261: (dbg) Run:  kubectl --context addons-335640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (357.64s)

x
+
TestAddons/StoppedEnableDisable (154.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-335640
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-335640: exit status 82 (2m0.462294776s)

-- stdout --
	* Stopping node "addons-335640"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-335640" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335640
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-335640: exit status 11 (21.611484625s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-335640" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335640
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-335640: exit status 11 (6.14427695s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-335640" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-335640
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-335640: exit status 11 (6.142401515s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-335640" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.36s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.39s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 node stop m02 -v=7 --alsologtostderr
E1011 21:21:05.467577   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:21:46.429065   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:22:06.383017   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-610874 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.456597055s)

-- stdout --
	* Stopping node "ha-610874-m02"  ...

-- /stdout --
** stderr ** 
	I1011 21:21:00.202197   34111 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:21:00.202501   34111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:21:00.202514   34111 out.go:358] Setting ErrFile to fd 2...
	I1011 21:21:00.202518   34111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:21:00.202762   34111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:21:00.202991   34111 mustload.go:65] Loading cluster: ha-610874
	I1011 21:21:00.203454   34111 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:21:00.203474   34111 stop.go:39] StopHost: ha-610874-m02
	I1011 21:21:00.204005   34111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:21:00.204055   34111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:21:00.219214   34111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I1011 21:21:00.219716   34111 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:21:00.220241   34111 main.go:141] libmachine: Using API Version  1
	I1011 21:21:00.220268   34111 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:21:00.220574   34111 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:21:00.223161   34111 out.go:177] * Stopping node "ha-610874-m02"  ...
	I1011 21:21:00.224374   34111 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 21:21:00.224396   34111 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:21:00.224577   34111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 21:21:00.224599   34111 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:21:00.226941   34111 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:21:00.227330   34111 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:21:00.227369   34111 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:21:00.227485   34111 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:21:00.227605   34111 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:21:00.227736   34111 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:21:00.227872   34111 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:21:00.315895   34111 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 21:21:00.369613   34111 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 21:21:00.423458   34111 main.go:141] libmachine: Stopping "ha-610874-m02"...
	I1011 21:21:00.423483   34111 main.go:141] libmachine: (ha-610874-m02) Calling .GetState
	I1011 21:21:00.425064   34111 main.go:141] libmachine: (ha-610874-m02) Calling .Stop
	I1011 21:21:00.428367   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 0/120
	I1011 21:21:01.429643   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 1/120
	I1011 21:21:02.430832   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 2/120
	I1011 21:21:03.432902   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 3/120
	I1011 21:21:04.434394   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 4/120
	I1011 21:21:05.436345   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 5/120
	I1011 21:21:06.437664   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 6/120
	I1011 21:21:07.439121   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 7/120
	I1011 21:21:08.440851   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 8/120
	I1011 21:21:09.442159   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 9/120
	I1011 21:21:10.444429   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 10/120
	I1011 21:21:11.445775   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 11/120
	I1011 21:21:12.447130   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 12/120
	I1011 21:21:13.449216   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 13/120
	I1011 21:21:14.450649   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 14/120
	I1011 21:21:15.452952   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 15/120
	I1011 21:21:16.454102   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 16/120
	I1011 21:21:17.455564   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 17/120
	I1011 21:21:18.457061   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 18/120
	I1011 21:21:19.458205   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 19/120
	I1011 21:21:20.460202   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 20/120
	I1011 21:21:21.461440   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 21/120
	I1011 21:21:22.462675   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 22/120
	I1011 21:21:23.463840   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 23/120
	I1011 21:21:24.465146   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 24/120
	I1011 21:21:25.466808   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 25/120
	I1011 21:21:26.468171   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 26/120
	I1011 21:21:27.469846   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 27/120
	I1011 21:21:28.471253   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 28/120
	I1011 21:21:29.472622   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 29/120
	I1011 21:21:30.474684   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 30/120
	I1011 21:21:31.475950   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 31/120
	I1011 21:21:32.477215   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 32/120
	I1011 21:21:33.478540   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 33/120
	I1011 21:21:34.479937   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 34/120
	I1011 21:21:35.481807   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 35/120
	I1011 21:21:36.483109   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 36/120
	I1011 21:21:37.484665   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 37/120
	I1011 21:21:38.486325   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 38/120
	I1011 21:21:39.487665   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 39/120
	I1011 21:21:40.489734   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 40/120
	I1011 21:21:41.490949   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 41/120
	I1011 21:21:42.493110   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 42/120
	I1011 21:21:43.495530   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 43/120
	I1011 21:21:44.497059   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 44/120
	I1011 21:21:45.498763   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 45/120
	I1011 21:21:46.501222   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 46/120
	I1011 21:21:47.502534   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 47/120
	I1011 21:21:48.504338   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 48/120
	I1011 21:21:49.505665   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 49/120
	I1011 21:21:50.507724   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 50/120
	I1011 21:21:51.508941   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 51/120
	I1011 21:21:52.510649   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 52/120
	I1011 21:21:53.512113   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 53/120
	I1011 21:21:54.513299   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 54/120
	I1011 21:21:55.515172   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 55/120
	I1011 21:21:56.517049   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 56/120
	I1011 21:21:57.518315   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 57/120
	I1011 21:21:58.519623   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 58/120
	I1011 21:21:59.520999   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 59/120
	I1011 21:22:00.522486   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 60/120
	I1011 21:22:01.524011   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 61/120
	I1011 21:22:02.525158   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 62/120
	I1011 21:22:03.526458   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 63/120
	I1011 21:22:04.527650   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 64/120
	I1011 21:22:05.528869   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 65/120
	I1011 21:22:06.530336   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 66/120
	I1011 21:22:07.531533   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 67/120
	I1011 21:22:08.533110   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 68/120
	I1011 21:22:09.535059   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 69/120
	I1011 21:22:10.536919   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 70/120
	I1011 21:22:11.538395   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 71/120
	I1011 21:22:12.539699   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 72/120
	I1011 21:22:13.540935   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 73/120
	I1011 21:22:14.542221   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 74/120
	I1011 21:22:15.543721   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 75/120
	I1011 21:22:16.544947   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 76/120
	I1011 21:22:17.546288   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 77/120
	I1011 21:22:18.548100   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 78/120
	I1011 21:22:19.549466   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 79/120
	I1011 21:22:20.551431   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 80/120
	I1011 21:22:21.553546   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 81/120
	I1011 21:22:22.555504   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 82/120
	I1011 21:22:23.557048   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 83/120
	I1011 21:22:24.558890   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 84/120
	I1011 21:22:25.560593   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 85/120
	I1011 21:22:26.561712   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 86/120
	I1011 21:22:27.562871   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 87/120
	I1011 21:22:28.565026   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 88/120
	I1011 21:22:29.566197   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 89/120
	I1011 21:22:30.568071   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 90/120
	I1011 21:22:31.569187   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 91/120
	I1011 21:22:32.570846   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 92/120
	I1011 21:22:33.572871   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 93/120
	I1011 21:22:34.574244   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 94/120
	I1011 21:22:35.576054   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 95/120
	I1011 21:22:36.577417   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 96/120
	I1011 21:22:37.578604   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 97/120
	I1011 21:22:38.579952   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 98/120
	I1011 21:22:39.581533   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 99/120
	I1011 21:22:40.583667   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 100/120
	I1011 21:22:41.585023   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 101/120
	I1011 21:22:42.586206   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 102/120
	I1011 21:22:43.587661   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 103/120
	I1011 21:22:44.588925   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 104/120
	I1011 21:22:45.590692   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 105/120
	I1011 21:22:46.591992   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 106/120
	I1011 21:22:47.593171   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 107/120
	I1011 21:22:48.594329   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 108/120
	I1011 21:22:49.595624   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 109/120
	I1011 21:22:50.597467   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 110/120
	I1011 21:22:51.598701   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 111/120
	I1011 21:22:52.599899   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 112/120
	I1011 21:22:53.602263   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 113/120
	I1011 21:22:54.603620   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 114/120
	I1011 21:22:55.605441   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 115/120
	I1011 21:22:56.606610   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 116/120
	I1011 21:22:57.608749   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 117/120
	I1011 21:22:58.610825   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 118/120
	I1011 21:22:59.612983   34111 main.go:141] libmachine: (ha-610874-m02) Waiting for machine to stop 119/120
	I1011 21:23:00.614044   34111 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 21:23:00.614211   34111 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-610874 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
E1011 21:23:08.350770   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr: (18.632839453s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (1.448476182s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m03_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:16:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:16:16.315983   29617 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:16:16.316246   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316256   29617 out.go:358] Setting ErrFile to fd 2...
	I1011 21:16:16.316260   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316440   29617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:16:16.316986   29617 out.go:352] Setting JSON to false
	I1011 21:16:16.317794   29617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3521,"bootTime":1728677855,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:16:16.317891   29617 start.go:139] virtualization: kvm guest
	I1011 21:16:16.320541   29617 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:16:16.321962   29617 notify.go:220] Checking for updates...
	I1011 21:16:16.321994   29617 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:16:16.323197   29617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:16:16.324431   29617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:16:16.325803   29617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.326998   29617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:16:16.328308   29617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:16:16.329813   29617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:16:16.364781   29617 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 21:16:16.366005   29617 start.go:297] selected driver: kvm2
	I1011 21:16:16.366018   29617 start.go:901] validating driver "kvm2" against <nil>
	I1011 21:16:16.366031   29617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:16:16.366752   29617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.366844   29617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:16:16.382125   29617 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:16:16.382207   29617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 21:16:16.382499   29617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:16:16.382537   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:16.382594   29617 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 21:16:16.382605   29617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 21:16:16.382687   29617 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:16.382807   29617 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.384631   29617 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:16:16.385929   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:16.385976   29617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:16:16.385989   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:16:16.386070   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:16:16.386083   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:16:16.386381   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:16.386407   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json: {Name:mk126d2587705783f49cefd5532c6478d010ac07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:16.386555   29617 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:16:16.386593   29617 start.go:364] duration metric: took 23.105µs to acquireMachinesLock for "ha-610874"
	I1011 21:16:16.386631   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:16:16.386695   29617 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 21:16:16.388125   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:16:16.388266   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:16:16.388308   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:16:16.402198   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1011 21:16:16.402701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:16:16.403193   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:16:16.403238   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:16:16.403629   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:16:16.403831   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:16.403987   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:16.404130   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:16:16.404153   29617 client.go:168] LocalClient.Create starting
	I1011 21:16:16.404179   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:16:16.404207   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404220   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404273   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:16:16.404296   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404309   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404323   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:16:16.404331   29617 main.go:141] libmachine: (ha-610874) Calling .PreCreateCheck
	I1011 21:16:16.404634   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:16.404967   29617 main.go:141] libmachine: Creating machine...
	I1011 21:16:16.404978   29617 main.go:141] libmachine: (ha-610874) Calling .Create
	I1011 21:16:16.405091   29617 main.go:141] libmachine: (ha-610874) Creating KVM machine...
	I1011 21:16:16.406548   29617 main.go:141] libmachine: (ha-610874) DBG | found existing default KVM network
	I1011 21:16:16.407330   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.407180   29640 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1011 21:16:16.407350   29617 main.go:141] libmachine: (ha-610874) DBG | created network xml: 
	I1011 21:16:16.407362   29617 main.go:141] libmachine: (ha-610874) DBG | <network>
	I1011 21:16:16.407369   29617 main.go:141] libmachine: (ha-610874) DBG |   <name>mk-ha-610874</name>
	I1011 21:16:16.407378   29617 main.go:141] libmachine: (ha-610874) DBG |   <dns enable='no'/>
	I1011 21:16:16.407386   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407396   29617 main.go:141] libmachine: (ha-610874) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 21:16:16.407401   29617 main.go:141] libmachine: (ha-610874) DBG |     <dhcp>
	I1011 21:16:16.407430   29617 main.go:141] libmachine: (ha-610874) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 21:16:16.407460   29617 main.go:141] libmachine: (ha-610874) DBG |     </dhcp>
	I1011 21:16:16.407476   29617 main.go:141] libmachine: (ha-610874) DBG |   </ip>
	I1011 21:16:16.407485   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407492   29617 main.go:141] libmachine: (ha-610874) DBG | </network>
	I1011 21:16:16.407498   29617 main.go:141] libmachine: (ha-610874) DBG | 
	I1011 21:16:16.412623   29617 main.go:141] libmachine: (ha-610874) DBG | trying to create private KVM network mk-ha-610874 192.168.39.0/24...
	I1011 21:16:16.475097   29617 main.go:141] libmachine: (ha-610874) DBG | private KVM network mk-ha-610874 192.168.39.0/24 created
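The network XML printed above is defined and activated through libvirt. A self-contained sketch of that step using the libvirt Go bindings (import path, error handling, and structure are illustrative, not the kvm2 driver's actual code):

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt" // assumption: official Go bindings are available
)

func main() {
	// XML equivalent to the mk-ha-610874 definition shown in the log.
	netXML := `<network>
  <name>mk-ha-610874</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network, then activate it; this corresponds to
	// the "private KVM network mk-ha-610874 192.168.39.0/24 created" line.
	network, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		log.Fatal(err)
	}
	defer network.Free()

	if err := network.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("network mk-ha-610874 is active")
}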
	I1011 21:16:16.475123   29617 main.go:141] libmachine: (ha-610874) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.475147   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.475097   29640 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.475159   29617 main.go:141] libmachine: (ha-610874) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:16:16.475241   29617 main.go:141] libmachine: (ha-610874) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:16:16.729125   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.729005   29640 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa...
	I1011 21:16:16.910019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.909910   29640 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk...
	I1011 21:16:16.910047   29617 main.go:141] libmachine: (ha-610874) DBG | Writing magic tar header
	I1011 21:16:16.910056   29617 main.go:141] libmachine: (ha-610874) DBG | Writing SSH key tar header
	I1011 21:16:16.910063   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.910020   29640 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.910136   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874
	I1011 21:16:16.910176   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 (perms=drwx------)
	I1011 21:16:16.910191   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:16:16.910200   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:16:16.910207   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.910225   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:16:16.910242   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:16:16.910260   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:16:16.910277   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:16:16.910286   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:16:16.910293   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:16:16.910306   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:16.910328   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:16:16.910345   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home
	I1011 21:16:16.910356   29617 main.go:141] libmachine: (ha-610874) DBG | Skipping /home - not owner
	I1011 21:16:16.911372   29617 main.go:141] libmachine: (ha-610874) define libvirt domain using xml: 
	I1011 21:16:16.911391   29617 main.go:141] libmachine: (ha-610874) <domain type='kvm'>
	I1011 21:16:16.911398   29617 main.go:141] libmachine: (ha-610874)   <name>ha-610874</name>
	I1011 21:16:16.911402   29617 main.go:141] libmachine: (ha-610874)   <memory unit='MiB'>2200</memory>
	I1011 21:16:16.911407   29617 main.go:141] libmachine: (ha-610874)   <vcpu>2</vcpu>
	I1011 21:16:16.911412   29617 main.go:141] libmachine: (ha-610874)   <features>
	I1011 21:16:16.911418   29617 main.go:141] libmachine: (ha-610874)     <acpi/>
	I1011 21:16:16.911425   29617 main.go:141] libmachine: (ha-610874)     <apic/>
	I1011 21:16:16.911430   29617 main.go:141] libmachine: (ha-610874)     <pae/>
	I1011 21:16:16.911442   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911451   29617 main.go:141] libmachine: (ha-610874)   </features>
	I1011 21:16:16.911459   29617 main.go:141] libmachine: (ha-610874)   <cpu mode='host-passthrough'>
	I1011 21:16:16.911467   29617 main.go:141] libmachine: (ha-610874)   
	I1011 21:16:16.911473   29617 main.go:141] libmachine: (ha-610874)   </cpu>
	I1011 21:16:16.911479   29617 main.go:141] libmachine: (ha-610874)   <os>
	I1011 21:16:16.911484   29617 main.go:141] libmachine: (ha-610874)     <type>hvm</type>
	I1011 21:16:16.911489   29617 main.go:141] libmachine: (ha-610874)     <boot dev='cdrom'/>
	I1011 21:16:16.911492   29617 main.go:141] libmachine: (ha-610874)     <boot dev='hd'/>
	I1011 21:16:16.911498   29617 main.go:141] libmachine: (ha-610874)     <bootmenu enable='no'/>
	I1011 21:16:16.911504   29617 main.go:141] libmachine: (ha-610874)   </os>
	I1011 21:16:16.911510   29617 main.go:141] libmachine: (ha-610874)   <devices>
	I1011 21:16:16.911516   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='cdrom'>
	I1011 21:16:16.911532   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/boot2docker.iso'/>
	I1011 21:16:16.911547   29617 main.go:141] libmachine: (ha-610874)       <target dev='hdc' bus='scsi'/>
	I1011 21:16:16.911568   29617 main.go:141] libmachine: (ha-610874)       <readonly/>
	I1011 21:16:16.911586   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911596   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='disk'>
	I1011 21:16:16.911605   29617 main.go:141] libmachine: (ha-610874)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:16:16.911637   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk'/>
	I1011 21:16:16.911655   29617 main.go:141] libmachine: (ha-610874)       <target dev='hda' bus='virtio'/>
	I1011 21:16:16.911674   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911692   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911700   29617 main.go:141] libmachine: (ha-610874)       <source network='mk-ha-610874'/>
	I1011 21:16:16.911705   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911709   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911713   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911719   29617 main.go:141] libmachine: (ha-610874)       <source network='default'/>
	I1011 21:16:16.911726   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911730   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911736   29617 main.go:141] libmachine: (ha-610874)     <serial type='pty'>
	I1011 21:16:16.911741   29617 main.go:141] libmachine: (ha-610874)       <target port='0'/>
	I1011 21:16:16.911745   29617 main.go:141] libmachine: (ha-610874)     </serial>
	I1011 21:16:16.911751   29617 main.go:141] libmachine: (ha-610874)     <console type='pty'>
	I1011 21:16:16.911757   29617 main.go:141] libmachine: (ha-610874)       <target type='serial' port='0'/>
	I1011 21:16:16.911762   29617 main.go:141] libmachine: (ha-610874)     </console>
	I1011 21:16:16.911771   29617 main.go:141] libmachine: (ha-610874)     <rng model='virtio'>
	I1011 21:16:16.911795   29617 main.go:141] libmachine: (ha-610874)       <backend model='random'>/dev/random</backend>
	I1011 21:16:16.911810   29617 main.go:141] libmachine: (ha-610874)     </rng>
	I1011 21:16:16.911818   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911827   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911835   29617 main.go:141] libmachine: (ha-610874)   </devices>
	I1011 21:16:16.911844   29617 main.go:141] libmachine: (ha-610874) </domain>
	I1011 21:16:16.911853   29617 main.go:141] libmachine: (ha-610874) 
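Booting the guest from a domain definition like the XML above is a two-step libvirt operation: define the persistent domain, then create (start) it, matching the "define libvirt domain using xml" and "Creating domain..." lines. A minimal sketch with the Go bindings, again illustrative rather than the driver's real code (the XML file path is hypothetical):

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumption: official Go bindings are available
)

func main() {
	// Read a domain definition like the one printed above.
	xml, err := os.ReadFile("ha-610874.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain started")
}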
	I1011 21:16:16.916111   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:3e:bc:a1 in network default
	I1011 21:16:16.916699   29617 main.go:141] libmachine: (ha-610874) Ensuring networks are active...
	I1011 21:16:16.916720   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:16.917266   29617 main.go:141] libmachine: (ha-610874) Ensuring network default is active
	I1011 21:16:16.917528   29617 main.go:141] libmachine: (ha-610874) Ensuring network mk-ha-610874 is active
	I1011 21:16:16.918196   29617 main.go:141] libmachine: (ha-610874) Getting domain xml...
	I1011 21:16:16.918917   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:18.090043   29617 main.go:141] libmachine: (ha-610874) Waiting to get IP...
	I1011 21:16:18.090745   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.091141   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.091169   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.091121   29640 retry.go:31] will retry after 201.066044ms: waiting for machine to come up
	I1011 21:16:18.293473   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.293939   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.293961   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.293905   29640 retry.go:31] will retry after 378.868503ms: waiting for machine to come up
	I1011 21:16:18.674665   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.675080   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.675111   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.675034   29640 retry.go:31] will retry after 485.059913ms: waiting for machine to come up
	I1011 21:16:19.161402   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.161817   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.161841   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.161779   29640 retry.go:31] will retry after 597.34397ms: waiting for machine to come up
	I1011 21:16:19.760468   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.761020   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.761049   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.760968   29640 retry.go:31] will retry after 563.860814ms: waiting for machine to come up
	I1011 21:16:20.326631   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:20.326999   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:20.327019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:20.326975   29640 retry.go:31] will retry after 723.522472ms: waiting for machine to come up
	I1011 21:16:21.051775   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:21.052216   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:21.052252   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:21.052167   29640 retry.go:31] will retry after 1.08960891s: waiting for machine to come up
	I1011 21:16:22.142962   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:22.143401   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:22.143426   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:22.143368   29640 retry.go:31] will retry after 897.228253ms: waiting for machine to come up
	I1011 21:16:23.042418   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:23.042804   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:23.042830   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:23.042766   29640 retry.go:31] will retry after 1.598924345s: waiting for machine to come up
	I1011 21:16:24.643409   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:24.643801   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:24.643824   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:24.643752   29640 retry.go:31] will retry after 2.213754576s: waiting for machine to come up
	I1011 21:16:26.858883   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:26.859262   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:26.859288   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:26.859206   29640 retry.go:31] will retry after 2.657896821s: waiting for machine to come up
	I1011 21:16:29.518223   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:29.518660   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:29.518685   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:29.518604   29640 retry.go:31] will retry after 3.090933093s: waiting for machine to come up
	I1011 21:16:32.611083   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:32.611504   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:32.611526   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:32.611439   29640 retry.go:31] will retry after 4.256728144s: waiting for machine to come up
	I1011 21:16:36.869470   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.869869   29617 main.go:141] libmachine: (ha-610874) Found IP for machine: 192.168.39.10
	I1011 21:16:36.869889   29617 main.go:141] libmachine: (ha-610874) Reserving static IP address...
	I1011 21:16:36.869901   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has current primary IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.870189   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "ha-610874", mac: "52:54:00:5f:c7:da", ip: "192.168.39.10"} in network mk-ha-610874
	I1011 21:16:36.939387   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:36.939416   29617 main.go:141] libmachine: (ha-610874) Reserved static IP address: 192.168.39.10
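The "will retry after ..." lines above show a jittered, growing backoff while the driver polls for the guest's DHCP lease until an IP appears for its MAC address. A rough sketch of that retry shape (the lease lookup itself is stubbed out here as an assumption):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// machine's MAC address; it always fails so the retry path is visible.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a randomized, growing delay, similar to the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine did not acquire an IP address")
}

func main() {
	if _, err := waitForIP(3); err != nil {
		fmt.Println(err)
	}
}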
	I1011 21:16:36.939452   29617 main.go:141] libmachine: (ha-610874) Waiting for SSH to be available...
	I1011 21:16:36.941715   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.941968   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874
	I1011 21:16:36.941981   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:5f:c7:da
	I1011 21:16:36.942096   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:36.942140   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:36.942184   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:36.942200   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:36.942220   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:36.945904   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:16:36.945918   29617 main.go:141] libmachine: (ha-610874) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:16:36.945924   29617 main.go:141] libmachine: (ha-610874) DBG | command : exit 0
	I1011 21:16:36.945937   29617 main.go:141] libmachine: (ha-610874) DBG | err     : exit status 255
	I1011 21:16:36.945943   29617 main.go:141] libmachine: (ha-610874) DBG | output  : 
	I1011 21:16:39.948099   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:39.950401   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950756   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:39.950783   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950892   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:39.950914   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:39.950953   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:39.950970   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:39.950994   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:40.078944   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: <nil>: 
	I1011 21:16:40.079215   29617 main.go:141] libmachine: (ha-610874) KVM machine creation complete!
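The WaitForSSH phase simply runs `exit 0` over SSH with host-key checking disabled until it succeeds; the earlier exit status 255 means sshd was not up yet. A small sketch of that readiness probe using the system ssh client (the key path and retry count are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` over SSH with options similar to those shown in the
// log; a non-zero exit means the guest is not reachable yet.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	// Illustrative values only; the key path is hypothetical.
	ip, key := "192.168.39.10", "/path/to/id_rsa"
	for i := 0; i < 5; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH never became available")
}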
	I1011 21:16:40.079553   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:40.080090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080284   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080465   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:16:40.080487   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:16:40.081981   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:16:40.081998   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:16:40.082006   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:16:40.082015   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.084298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084628   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.084651   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084818   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.084959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085094   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085224   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.085388   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.085639   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.085653   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:16:40.198146   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.198167   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:16:40.198175   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.200910   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201288   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.201309   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201507   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.201664   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.201836   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.202076   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.202254   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.202419   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.202429   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:16:40.320067   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:16:40.320126   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:16:40.320134   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:16:40.320143   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320383   29617 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:16:40.320406   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320566   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.322841   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323123   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.323151   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323298   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.323462   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323604   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323710   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.323847   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.324007   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.324018   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:16:40.453038   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:16:40.453062   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.455945   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456318   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.456341   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456518   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.456721   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456849   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.457152   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.457380   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.457403   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:16:40.579667   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.579694   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:16:40.579712   29617 buildroot.go:174] setting up certificates
	I1011 21:16:40.579722   29617 provision.go:84] configureAuth start
	I1011 21:16:40.579730   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.579972   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:40.582609   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.582944   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.582970   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.583046   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.585314   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585630   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.585652   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585815   29617 provision.go:143] copyHostCerts
	I1011 21:16:40.585854   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585886   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:16:40.585905   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585976   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:16:40.586075   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586099   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:16:40.586109   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586148   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:16:40.586259   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586280   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:16:40.586286   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586312   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:16:40.586375   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
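(Editor's note: provision.go generates this server certificate in Go. A roughly equivalent manual sequence with openssl, assuming the same CA material and SAN list as above and with file names shortened, would look like the sketch below; it is illustrative only, not minikube's actual code path.)

    # create a key + CSR, then sign it with the minikube CA for the listed SANs
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-610874" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.10,DNS:ha-610874,DNS:localhost,DNS:minikube')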
	I1011 21:16:40.739496   29617 provision.go:177] copyRemoteCerts
	I1011 21:16:40.739549   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:16:40.739572   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.742211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742512   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.742540   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742690   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.742858   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.743050   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.743333   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:40.830053   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:16:40.830129   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:16:40.854808   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:16:40.854871   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:16:40.878779   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:16:40.878844   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1011 21:16:40.903681   29617 provision.go:87] duration metric: took 323.94786ms to configureAuth
	I1011 21:16:40.903706   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:16:40.903876   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:16:40.903945   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.906420   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906781   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.906802   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906980   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.907177   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907312   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907417   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.907537   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.907709   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.907729   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:16:41.149826   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:16:41.149854   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:16:41.149864   29617 main.go:141] libmachine: (ha-610874) Calling .GetURL
	I1011 21:16:41.151110   29617 main.go:141] libmachine: (ha-610874) DBG | Using libvirt version 6000000
	I1011 21:16:41.153298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153626   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.153645   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153813   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:16:41.153832   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:16:41.153840   29617 client.go:171] duration metric: took 24.749677896s to LocalClient.Create
	I1011 21:16:41.153864   29617 start.go:167] duration metric: took 24.749734503s to libmachine.API.Create "ha-610874"
	I1011 21:16:41.153877   29617 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:16:41.153888   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:16:41.153907   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.154134   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:16:41.154156   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.156353   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156731   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.156764   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.157060   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.157197   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.157377   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.245691   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:16:41.249882   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:16:41.249905   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:16:41.249959   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:16:41.250032   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:16:41.250041   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:16:41.250126   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:16:41.259595   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:41.283193   29617 start.go:296] duration metric: took 129.282074ms for postStartSetup
	I1011 21:16:41.283237   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:41.283845   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.286641   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.286965   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.286993   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.287545   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:41.287766   29617 start.go:128] duration metric: took 24.901059572s to createHost
	I1011 21:16:41.287798   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.290002   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290466   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.290494   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290571   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.290756   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.290937   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.291088   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.291234   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:41.291438   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:41.291450   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:16:41.403429   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681401.368525171
	
	I1011 21:16:41.403454   29617 fix.go:216] guest clock: 1728681401.368525171
	I1011 21:16:41.403464   29617 fix.go:229] Guest: 2024-10-11 21:16:41.368525171 +0000 UTC Remote: 2024-10-11 21:16:41.287784391 +0000 UTC m=+25.009627787 (delta=80.74078ms)
	I1011 21:16:41.403482   29617 fix.go:200] guest clock delta is within tolerance: 80.74078ms
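(Editor's note: the delta is simply the guest's `date +%s.%N` reading minus the timestamp labelled Remote in the line above: 1728681401.368525171 − 1728681401.287784391 ≈ 0.08074 s, which is inside minikube's tolerance, so the guest clock is left untouched. The arithmetic by hand:)

    echo '1728681401.368525171 - 1728681401.287784391' | bc
    # .080740780   (≈ 80.74 ms)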
	I1011 21:16:41.403487   29617 start.go:83] releasing machines lock for "ha-610874", held for 25.016883267s
	I1011 21:16:41.403504   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.403754   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.406243   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406536   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.406580   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406719   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407201   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407373   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407483   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:16:41.407533   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.407566   29617 ssh_runner.go:195] Run: cat /version.json
	I1011 21:16:41.407594   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.409924   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410186   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410232   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410307   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.410474   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.410626   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.410667   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410689   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410822   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.410885   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.411000   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.411159   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.411313   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.492040   29617 ssh_runner.go:195] Run: systemctl --version
	I1011 21:16:41.526227   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:16:41.684068   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:16:41.690188   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:16:41.690243   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:16:41.709475   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:16:41.709500   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:16:41.709563   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:16:41.725364   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:16:41.739326   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:16:41.739404   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:16:41.753640   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:16:41.767723   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:16:41.878060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:16:42.036051   29617 docker.go:233] disabling docker service ...
	I1011 21:16:42.036136   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:16:42.051987   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:16:42.065946   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:16:42.197199   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:16:42.333061   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
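(Editor's note: taken together, the commands above ensure only CRI-O can own the CRI socket: cri-docker and docker are stopped, their sockets disabled, and the services masked so nothing restarts them. A condensed manual equivalent, illustrative only:)

    sudo systemctl stop cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active docker    # should report "inactive"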
	I1011 21:16:42.346878   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:16:42.365538   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:16:42.365592   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.375884   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:16:42.375943   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.386250   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.396765   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.407109   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:16:42.417549   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.427975   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.446147   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.456868   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:16:42.466165   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:16:42.466232   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:16:42.479799   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
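(Editor's note: the failed sysctl above is expected on a fresh guest, since /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; minikube therefore loads the module and switches on IPv4 forwarding directly. Condensed, illustrative only:)

    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
      || sudo modprobe br_netfilter             # the sysctl key appears only after the module loads
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # pod-to-pod routing needs IPv4 forwarding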
	I1011 21:16:42.489557   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:42.623905   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:16:42.716796   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:16:42.716871   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:16:42.721858   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:16:42.721918   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:16:42.725704   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:16:42.764981   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
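(Editor's note: "Will wait 60s for crictl version" polls the container runtime over its socket until it answers. The same check by hand on the guest, illustrative and assuming the CRI-O socket path used throughout this run:)

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version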
	I1011 21:16:42.765051   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.793072   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.822676   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:16:42.824024   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:42.826801   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827112   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:42.827137   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827350   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:16:42.831498   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:42.845346   29617 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:16:42.845519   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:42.845589   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:42.883957   29617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 21:16:42.884036   29617 ssh_runner.go:195] Run: which lz4
	I1011 21:16:42.888030   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1011 21:16:42.888109   29617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 21:16:42.892241   29617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 21:16:42.892274   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 21:16:44.230363   29617 crio.go:462] duration metric: took 1.342272134s to copy over tarball
	I1011 21:16:44.230455   29617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 21:16:46.214291   29617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.983794178s)
	I1011 21:16:46.214315   29617 crio.go:469] duration metric: took 1.983922074s to extract the tarball
	I1011 21:16:46.214323   29617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 21:16:46.250833   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:46.298082   29617 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:16:46.298105   29617 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:16:46.298113   29617 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:16:46.298286   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:16:46.298384   29617 ssh_runner.go:195] Run: crio config
	I1011 21:16:46.343467   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:46.343493   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:46.343504   29617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:16:46.343528   29617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:16:46.343703   29617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:16:46.343730   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:16:46.343782   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:16:46.359672   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:16:46.359783   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
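(Editor's note: the static pod above runs kube-vip with leader election on the plndr-cp-lock lease, so exactly one control-plane node announces the VIP 192.168.39.254 on eth0, and with lb_enable it also balances API traffic on port 8443. Once the control plane is up, the VIP can be probed directly; illustrative only:)

    curl -k https://192.168.39.254:8443/version   # any healthy apiserver behind the VIP should answer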
	I1011 21:16:46.359850   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:16:46.370362   29617 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:16:46.370421   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:16:46.380573   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:16:46.396912   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:16:46.413759   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:16:46.430823   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
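(Editor's note: at this point the kubeadm config shown earlier has been copied onto the node as /var/tmp/minikube/kubeadm.yaml.new; it is renamed to kubeadm.yaml just before init, as seen further down. If needed, the config can be exercised without bringing a control plane up, for example:)

    # illustrative: validate the generated config without committing node state
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run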
	I1011 21:16:46.447531   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:16:46.451423   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:46.463809   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:46.584169   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:16:46.602286   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:16:46.602304   29617 certs.go:194] generating shared ca certs ...
	I1011 21:16:46.602322   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.602467   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:16:46.602520   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:16:46.602533   29617 certs.go:256] generating profile certs ...
	I1011 21:16:46.602592   29617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:16:46.602638   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt with IP's: []
	I1011 21:16:46.782362   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt ...
	I1011 21:16:46.782395   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt: {Name:mk3593f4e91ffc0372a05bdad3e927ec316a91aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782596   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key ...
	I1011 21:16:46.782611   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key: {Name:mk9677876d62491747fdfd0e3f8d4776645d1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782738   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7
	I1011 21:16:46.782756   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.254]
	I1011 21:16:47.380528   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 ...
	I1011 21:16:47.380560   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7: {Name:mk19e9d91179b46f9b03d4d9246179f41c3327ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380745   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 ...
	I1011 21:16:47.380776   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7: {Name:mk7fedd6c046987d5af851e2eed75ec367a33eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380872   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:16:47.380985   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:16:47.381067   29617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:16:47.381087   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt with IP's: []
	I1011 21:16:47.453906   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt ...
	I1011 21:16:47.453937   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt: {Name:mka90ed4c47ce0265f1b9da519124bd4fc73bbae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454114   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key ...
	I1011 21:16:47.454128   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key: {Name:mk47103fb5abe47f635456ba2a4ed9a69f678b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454230   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:16:47.454250   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:16:47.454266   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:16:47.454284   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:16:47.454303   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:16:47.454319   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:16:47.454335   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:16:47.454354   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:16:47.454417   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:16:47.454461   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:16:47.454473   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:16:47.454508   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:16:47.454543   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:16:47.454573   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:16:47.454648   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:47.454696   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.454719   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.454738   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.455273   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:16:47.481574   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:16:47.514683   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:16:47.538141   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:16:47.561021   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:16:47.585590   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:16:47.608816   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:16:47.632949   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:16:47.656849   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:16:47.680043   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:16:47.703417   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:16:47.726027   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:16:47.747378   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:16:47.754019   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:16:47.765407   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770565   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770631   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.776851   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:16:47.788126   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:16:47.799052   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803877   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803931   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.810054   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:16:47.821548   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:16:47.832817   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837775   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837829   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.843943   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
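(Editor's note: each CA ends up reachable two ways under /etc/ssl/certs: by file name and by its OpenSSL subject-hash name, which is the b5213941.0 / 51391683.0 / 3ec20f2e.0 links above and is how TLS clients on the guest locate trust anchors. The hash part of the link name comes straight from the command already shown:)

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> symlinked as /etc/ssl/certs/b5213941.0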
	I1011 21:16:47.855398   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:16:47.859877   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:16:47.859928   29617 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:47.860006   29617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:16:47.860081   29617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:16:47.903170   29617 cri.go:89] found id: ""
	I1011 21:16:47.903248   29617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:16:47.914400   29617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 21:16:47.924721   29617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 21:16:47.935673   29617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 21:16:47.935695   29617 kubeadm.go:157] found existing configuration files:
	
	I1011 21:16:47.935740   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 21:16:47.945454   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 21:16:47.945524   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 21:16:47.955440   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 21:16:47.964875   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 21:16:47.964944   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 21:16:47.974788   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.984258   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 21:16:47.984307   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.993726   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 21:16:48.002584   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 21:16:48.002650   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
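(Editor's note: the four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it; here the files simply do not exist yet, hence the status-2 exits. Condensed into a loop, illustrative only:)

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done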
	I1011 21:16:48.012268   29617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 21:16:48.121155   29617 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 21:16:48.121351   29617 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 21:16:48.250203   29617 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 21:16:48.250314   29617 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 21:16:48.250452   29617 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 21:16:48.261245   29617 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 21:16:48.410718   29617 out.go:235]   - Generating certificates and keys ...
	I1011 21:16:48.410844   29617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 21:16:48.410931   29617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 21:16:48.542325   29617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 21:16:48.608543   29617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 21:16:48.797753   29617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 21:16:48.873089   29617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 21:16:49.070716   29617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 21:16:49.071155   29617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.372270   29617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 21:16:49.372512   29617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.423801   29617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 21:16:49.655483   29617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 21:16:49.724172   29617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 21:16:49.724487   29617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 21:16:50.017890   29617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 21:16:50.285355   29617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 21:16:50.392641   29617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 21:16:50.748011   29617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 21:16:50.984708   29617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 21:16:50.985344   29617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 21:16:50.988659   29617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 21:16:50.990557   29617 out.go:235]   - Booting up control plane ...
	I1011 21:16:50.990675   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 21:16:50.990768   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 21:16:50.992112   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 21:16:51.010698   29617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 21:16:51.019483   29617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 21:16:51.019560   29617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 21:16:51.165086   29617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 21:16:51.165244   29617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 21:16:51.666035   29617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.408194ms
	I1011 21:16:51.666178   29617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 21:16:58.166573   29617 kubeadm.go:310] [api-check] The API server is healthy after 6.502304408s
	I1011 21:16:58.179631   29617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 21:16:58.195028   29617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 21:16:58.220647   29617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 21:16:58.220871   29617 kubeadm.go:310] [mark-control-plane] Marking the node ha-610874 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 21:16:58.236113   29617 kubeadm.go:310] [bootstrap-token] Using token: j1o64v.rjb74fe9bovjls5f
	I1011 21:16:58.237740   29617 out.go:235]   - Configuring RBAC rules ...
	I1011 21:16:58.237875   29617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 21:16:58.245441   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 21:16:58.254162   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 21:16:58.259203   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 21:16:58.274345   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 21:16:58.278840   29617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 21:16:58.578576   29617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 21:16:59.008419   29617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 21:16:59.573438   29617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 21:16:59.574394   29617 kubeadm.go:310] 
	I1011 21:16:59.574519   29617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 21:16:59.574537   29617 kubeadm.go:310] 
	I1011 21:16:59.574645   29617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 21:16:59.574659   29617 kubeadm.go:310] 
	I1011 21:16:59.574685   29617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 21:16:59.574753   29617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 21:16:59.574825   29617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 21:16:59.574836   29617 kubeadm.go:310] 
	I1011 21:16:59.574917   29617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 21:16:59.574925   29617 kubeadm.go:310] 
	I1011 21:16:59.574988   29617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 21:16:59.574998   29617 kubeadm.go:310] 
	I1011 21:16:59.575073   29617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 21:16:59.575188   29617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 21:16:59.575286   29617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 21:16:59.575300   29617 kubeadm.go:310] 
	I1011 21:16:59.575406   29617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 21:16:59.575519   29617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 21:16:59.575533   29617 kubeadm.go:310] 
	I1011 21:16:59.575645   29617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.575774   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 21:16:59.575812   29617 kubeadm.go:310] 	--control-plane 
	I1011 21:16:59.575825   29617 kubeadm.go:310] 
	I1011 21:16:59.575924   29617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 21:16:59.575932   29617 kubeadm.go:310] 
	I1011 21:16:59.576044   29617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.576197   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 21:16:59.576985   29617 kubeadm.go:310] W1011 21:16:48.086167     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577396   29617 kubeadm.go:310] W1011 21:16:48.087109     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577500   29617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
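
The [kubelet-check] and [api-check] lines above are kubeadm polling two health endpoints until they return 200 OK. A rough stand-alone equivalent in Go, assuming the default kubelet healthz port and this cluster's API server address; the 500ms interval matches the log cadence and the insecure TLS client is for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the timeout expires.
func waitHealthy(url string, client *http.Client, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	plain := &http.Client{Timeout: 2 * time.Second}
	insecure := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", plain, 4*time.Minute))        // kubelet
	fmt.Println(waitHealthy("https://192.168.39.10:8443/healthz", insecure, 4*time.Minute)) // API server
}
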
	I1011 21:16:59.577512   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:59.577520   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:59.579873   29617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 21:16:59.581130   29617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 21:16:59.586500   29617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 21:16:59.586517   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 21:16:59.606073   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 21:16:59.978632   29617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 21:16:59.978713   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:16:59.978732   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874 minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=true
	I1011 21:17:00.174706   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:00.174708   29617 ops.go:34] apiserver oom_adj: -16
	I1011 21:17:00.675693   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.174849   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.675518   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.174832   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.674899   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.174904   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.254520   29617 kubeadm.go:1113] duration metric: took 3.275873473s to wait for elevateKubeSystemPrivileges
	I1011 21:17:03.254557   29617 kubeadm.go:394] duration metric: took 15.394633584s to StartCluster
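
The repeated `kubectl get sa default` calls above are minikube waiting for the default service account to appear before granting kube-system privileges (the elevateKubeSystemPrivileges step timed at 3.27s). A minimal sketch of that wait in Go, assuming kubectl is on PATH and pointed at the kubeconfig from the log; the one-minute timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or times out.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
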
	I1011 21:17:03.254574   29617 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.254667   29617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.255426   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.255658   29617 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:03.255670   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 21:17:03.255683   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:17:03.255698   29617 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 21:17:03.255784   29617 addons.go:69] Setting storage-provisioner=true in profile "ha-610874"
	I1011 21:17:03.255803   29617 addons.go:234] Setting addon storage-provisioner=true in "ha-610874"
	I1011 21:17:03.255807   29617 addons.go:69] Setting default-storageclass=true in profile "ha-610874"
	I1011 21:17:03.255835   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.255840   29617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-610874"
	I1011 21:17:03.255868   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:03.256287   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256300   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.256367   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.271522   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1011 21:17:03.271689   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I1011 21:17:03.272056   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272154   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272592   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272609   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272755   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272784   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272931   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273093   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.273112   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273524   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.273562   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.275146   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.275352   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 21:17:03.275763   29617 cert_rotation.go:140] Starting client certificate rotation controller
	I1011 21:17:03.275942   29617 addons.go:234] Setting addon default-storageclass=true in "ha-610874"
	I1011 21:17:03.275971   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.276303   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.276340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.288268   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1011 21:17:03.288701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.289186   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.289212   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.289573   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.289758   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.290984   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1011 21:17:03.291476   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.291798   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.292035   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.292052   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.292353   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.292786   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.292827   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.293969   29617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 21:17:03.295203   29617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.295223   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 21:17:03.295241   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.298221   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298669   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.298695   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298893   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.299039   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.299248   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.299371   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.307894   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33173
	I1011 21:17:03.308319   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.308780   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.308794   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.309115   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.309363   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.311112   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.311334   29617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.311352   29617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 21:17:03.311368   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.314487   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.314914   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.314938   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.315112   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.315274   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.315432   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.315580   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.390668   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 21:17:03.477039   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.523146   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.861068   29617 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1011 21:17:04.076843   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076867   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.076939   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076960   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077121   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077129   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077152   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077162   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077170   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077198   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077208   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077216   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077228   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077423   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077435   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077497   29617 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 21:17:04.077512   29617 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 21:17:04.077537   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077557   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077562   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077613   29617 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1011 21:17:04.077629   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.077640   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.077652   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.088649   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:04.089177   29617 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1011 21:17:04.089196   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.089204   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.089222   29617 round_trippers.go:473]     Content-Type: application/json
	I1011 21:17:04.089229   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.091300   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:04.091435   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.091450   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.091679   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.091716   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.091728   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.093543   29617 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 21:17:04.094783   29617 addons.go:510] duration metric: took 839.089678ms for enable addons: enabled=[storage-provisioner default-storageclass]
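
Enabling the two addons reduces to the pair of `kubectl apply` invocations logged at 21:17:03 against manifests previously copied to /etc/kubernetes/addons. A hand-rolled sketch of that step in Go, assuming the same kubectl binary and kubeconfig paths as the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

// applyAddon applies one addon manifest the way the log does: via sudo, with
// KUBECONFIG passed as a VAR=value argument that sudo forwards to kubectl.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			log.Fatalf("apply %s: %v", m, err)
		}
	}
}
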
	I1011 21:17:04.094816   29617 start.go:246] waiting for cluster config update ...
	I1011 21:17:04.094834   29617 start.go:255] writing updated cluster config ...
	I1011 21:17:04.096346   29617 out.go:201] 
	I1011 21:17:04.097685   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:04.097746   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.099389   29617 out.go:177] * Starting "ha-610874-m02" control-plane node in "ha-610874" cluster
	I1011 21:17:04.100656   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:17:04.100673   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:17:04.100774   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:17:04.100788   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:17:04.100851   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.100998   29617 start.go:360] acquireMachinesLock for ha-610874-m02: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:17:04.101042   29617 start.go:364] duration metric: took 25.742µs to acquireMachinesLock for "ha-610874-m02"
	I1011 21:17:04.101063   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:04.101132   29617 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1011 21:17:04.102447   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:17:04.102519   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:04.102554   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:04.117018   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I1011 21:17:04.117574   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:04.118020   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:04.118046   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:04.118342   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:04.118495   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:04.118627   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:04.118734   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:17:04.118757   29617 client.go:168] LocalClient.Create starting
	I1011 21:17:04.118782   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:17:04.118814   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118825   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118865   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:17:04.118883   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118895   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118909   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:17:04.118916   29617 main.go:141] libmachine: (ha-610874-m02) Calling .PreCreateCheck
	I1011 21:17:04.119022   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:04.119344   29617 main.go:141] libmachine: Creating machine...
	I1011 21:17:04.119354   29617 main.go:141] libmachine: (ha-610874-m02) Calling .Create
	I1011 21:17:04.119448   29617 main.go:141] libmachine: (ha-610874-m02) Creating KVM machine...
	I1011 21:17:04.120553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing default KVM network
	I1011 21:17:04.120665   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing private KVM network mk-ha-610874
	I1011 21:17:04.120779   29617 main.go:141] libmachine: (ha-610874-m02) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.120796   29617 main.go:141] libmachine: (ha-610874-m02) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:17:04.120855   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.120779   29991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.120961   29617 main.go:141] libmachine: (ha-610874-m02) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:17:04.350121   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.350001   29991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa...
	I1011 21:17:04.441541   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441397   29991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk...
	I1011 21:17:04.441576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing magic tar header
	I1011 21:17:04.441591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing SSH key tar header
	I1011 21:17:04.441603   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441509   29991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.441619   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02
	I1011 21:17:04.441634   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:17:04.441650   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 (perms=drwx------)
	I1011 21:17:04.441661   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.441676   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:17:04.441687   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:17:04.441702   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:17:04.441718   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:17:04.441730   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:17:04.441739   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:17:04.441771   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:04.441793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:17:04.441805   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:17:04.441813   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home
	I1011 21:17:04.441826   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Skipping /home - not owner
	I1011 21:17:04.442818   29617 main.go:141] libmachine: (ha-610874-m02) define libvirt domain using xml: 
	I1011 21:17:04.442835   29617 main.go:141] libmachine: (ha-610874-m02) <domain type='kvm'>
	I1011 21:17:04.442851   29617 main.go:141] libmachine: (ha-610874-m02)   <name>ha-610874-m02</name>
	I1011 21:17:04.442859   29617 main.go:141] libmachine: (ha-610874-m02)   <memory unit='MiB'>2200</memory>
	I1011 21:17:04.442867   29617 main.go:141] libmachine: (ha-610874-m02)   <vcpu>2</vcpu>
	I1011 21:17:04.442876   29617 main.go:141] libmachine: (ha-610874-m02)   <features>
	I1011 21:17:04.442884   29617 main.go:141] libmachine: (ha-610874-m02)     <acpi/>
	I1011 21:17:04.442894   29617 main.go:141] libmachine: (ha-610874-m02)     <apic/>
	I1011 21:17:04.442901   29617 main.go:141] libmachine: (ha-610874-m02)     <pae/>
	I1011 21:17:04.442909   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.442916   29617 main.go:141] libmachine: (ha-610874-m02)   </features>
	I1011 21:17:04.442924   29617 main.go:141] libmachine: (ha-610874-m02)   <cpu mode='host-passthrough'>
	I1011 21:17:04.442929   29617 main.go:141] libmachine: (ha-610874-m02)   
	I1011 21:17:04.442935   29617 main.go:141] libmachine: (ha-610874-m02)   </cpu>
	I1011 21:17:04.442940   29617 main.go:141] libmachine: (ha-610874-m02)   <os>
	I1011 21:17:04.442944   29617 main.go:141] libmachine: (ha-610874-m02)     <type>hvm</type>
	I1011 21:17:04.442949   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='cdrom'/>
	I1011 21:17:04.442953   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='hd'/>
	I1011 21:17:04.442958   29617 main.go:141] libmachine: (ha-610874-m02)     <bootmenu enable='no'/>
	I1011 21:17:04.442966   29617 main.go:141] libmachine: (ha-610874-m02)   </os>
	I1011 21:17:04.442970   29617 main.go:141] libmachine: (ha-610874-m02)   <devices>
	I1011 21:17:04.442975   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='cdrom'>
	I1011 21:17:04.442982   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/boot2docker.iso'/>
	I1011 21:17:04.442988   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hdc' bus='scsi'/>
	I1011 21:17:04.442992   29617 main.go:141] libmachine: (ha-610874-m02)       <readonly/>
	I1011 21:17:04.442999   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443009   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='disk'>
	I1011 21:17:04.443018   29617 main.go:141] libmachine: (ha-610874-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:17:04.443028   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk'/>
	I1011 21:17:04.443033   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hda' bus='virtio'/>
	I1011 21:17:04.443037   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443042   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443047   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='mk-ha-610874'/>
	I1011 21:17:04.443052   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443057   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443061   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443066   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='default'/>
	I1011 21:17:04.443071   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443076   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443080   29617 main.go:141] libmachine: (ha-610874-m02)     <serial type='pty'>
	I1011 21:17:04.443085   29617 main.go:141] libmachine: (ha-610874-m02)       <target port='0'/>
	I1011 21:17:04.443089   29617 main.go:141] libmachine: (ha-610874-m02)     </serial>
	I1011 21:17:04.443094   29617 main.go:141] libmachine: (ha-610874-m02)     <console type='pty'>
	I1011 21:17:04.443099   29617 main.go:141] libmachine: (ha-610874-m02)       <target type='serial' port='0'/>
	I1011 21:17:04.443103   29617 main.go:141] libmachine: (ha-610874-m02)     </console>
	I1011 21:17:04.443109   29617 main.go:141] libmachine: (ha-610874-m02)     <rng model='virtio'>
	I1011 21:17:04.443137   29617 main.go:141] libmachine: (ha-610874-m02)       <backend model='random'>/dev/random</backend>
	I1011 21:17:04.443157   29617 main.go:141] libmachine: (ha-610874-m02)     </rng>
	I1011 21:17:04.443167   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443173   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443189   29617 main.go:141] libmachine: (ha-610874-m02)   </devices>
	I1011 21:17:04.443198   29617 main.go:141] libmachine: (ha-610874-m02) </domain>
	I1011 21:17:04.443208   29617 main.go:141] libmachine: (ha-610874-m02) 
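
The block above is the kvm2 driver printing the libvirt domain XML it is about to define for ha-610874-m02: CD-ROM boot from boot2docker.iso, a raw virtio disk backed by the .rawdisk file, and two virtio NICs on the mk-ha-610874 and default networks. minikube performs the define/start through the libvirt Go bindings; the same two steps can be reproduced by hand with virsh, as in this sketch (the domain.xml path is hypothetical and would hold the XML shown above):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// First register the domain definition, then boot it.
	for _, args := range [][]string{
		{"virsh", "--connect", "qemu:///system", "define", "domain.xml"},
		{"virsh", "--connect", "qemu:///system", "start", "ha-610874-m02"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%v: %v", args, err)
		}
	}
}
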
	I1011 21:17:04.449596   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f0:af:4d in network default
	I1011 21:17:04.450115   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring networks are active...
	I1011 21:17:04.450137   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:04.450871   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network default is active
	I1011 21:17:04.451172   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network mk-ha-610874 is active
	I1011 21:17:04.451696   29617 main.go:141] libmachine: (ha-610874-m02) Getting domain xml...
	I1011 21:17:04.452466   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:05.723228   29617 main.go:141] libmachine: (ha-610874-m02) Waiting to get IP...
	I1011 21:17:05.723997   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.724437   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.724489   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.724421   29991 retry.go:31] will retry after 216.617717ms: waiting for machine to come up
	I1011 21:17:05.943023   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.943470   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.943493   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.943418   29991 retry.go:31] will retry after 323.475706ms: waiting for machine to come up
	I1011 21:17:06.268759   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.269130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.269185   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.269071   29991 retry.go:31] will retry after 341.815784ms: waiting for machine to come up
	I1011 21:17:06.612587   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.613044   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.613069   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.612994   29991 retry.go:31] will retry after 575.567056ms: waiting for machine to come up
	I1011 21:17:07.189626   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.190024   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.190052   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.189979   29991 retry.go:31] will retry after 508.01524ms: waiting for machine to come up
	I1011 21:17:07.699512   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.699870   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.699896   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.699824   29991 retry.go:31] will retry after 706.438375ms: waiting for machine to come up
	I1011 21:17:08.408130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:08.408534   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:08.408553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:08.408491   29991 retry.go:31] will retry after 819.845939ms: waiting for machine to come up
	I1011 21:17:09.229809   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:09.230337   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:09.230361   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:09.230274   29991 retry.go:31] will retry after 1.08916769s: waiting for machine to come up
	I1011 21:17:10.320875   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:10.321316   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:10.321344   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:10.321274   29991 retry.go:31] will retry after 1.825013226s: waiting for machine to come up
	I1011 21:17:12.148418   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:12.148892   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:12.148912   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:12.148854   29991 retry.go:31] will retry after 1.911054739s: waiting for machine to come up
	I1011 21:17:14.062931   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:14.063353   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:14.063381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:14.063300   29991 retry.go:31] will retry after 2.512289875s: waiting for machine to come up
	I1011 21:17:16.577169   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:16.577555   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:16.577580   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:16.577519   29991 retry.go:31] will retry after 3.376491238s: waiting for machine to come up
	I1011 21:17:19.955606   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:19.955968   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:19.955995   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:19.955923   29991 retry.go:31] will retry after 4.049589987s: waiting for machine to come up
	I1011 21:17:24.010143   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010574   29617 main.go:141] libmachine: (ha-610874-m02) Found IP for machine: 192.168.39.11
	I1011 21:17:24.010593   29617 main.go:141] libmachine: (ha-610874-m02) Reserving static IP address...
	I1011 21:17:24.010602   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010971   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "ha-610874-m02", mac: "52:54:00:f3:cf:5a", ip: "192.168.39.11"} in network mk-ha-610874
	I1011 21:17:24.079043   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:24.079077   29617 main.go:141] libmachine: (ha-610874-m02) Reserved static IP address: 192.168.39.11
	I1011 21:17:24.079093   29617 main.go:141] libmachine: (ha-610874-m02) Waiting for SSH to be available...
	I1011 21:17:24.081543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.081867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874
	I1011 21:17:24.081880   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:f3:cf:5a
	I1011 21:17:24.082047   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:24.082076   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:24.082376   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:24.082572   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:24.082591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:24.086567   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:17:24.086597   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:17:24.086608   29617 main.go:141] libmachine: (ha-610874-m02) DBG | command : exit 0
	I1011 21:17:24.086627   29617 main.go:141] libmachine: (ha-610874-m02) DBG | err     : exit status 255
	I1011 21:17:24.086641   29617 main.go:141] libmachine: (ha-610874-m02) DBG | output  : 
	I1011 21:17:27.089089   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:27.091628   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.091976   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.092001   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.092162   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:27.092189   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:27.092213   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:27.092221   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:27.092230   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:27.218963   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: <nil>: 
	I1011 21:17:27.219245   29617 main.go:141] libmachine: (ha-610874-m02) KVM machine creation complete!
	I1011 21:17:27.219616   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:27.220149   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220344   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220511   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:17:27.220532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetState
	I1011 21:17:27.221755   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:17:27.221770   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:17:27.221778   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:17:27.221786   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.223867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224229   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.224267   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224374   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.224532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224655   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224768   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.224964   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.225164   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.225177   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:17:27.333813   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
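The WaitForSSH step recorded above is just a retry loop around the no-op command "exit 0": the first attempt fails with status 255 while the guest is still booting, and a later attempt succeeds once sshd accepts the machine key. A minimal, hedged sketch of that pattern in Go follows; it is illustrative only, not minikube's actual ssh_runner code, and the address, user, key path and retry budget are placeholders.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH retries a no-op "exit 0" over SSH until the host answers or the
    // attempts are exhausted, mirroring the probe visible in the log above.
    func waitForSSH(addr, user, keyPath string, attempts int) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; equivalent to StrictHostKeyChecking=no
    		Timeout:         10 * time.Second,
    	}
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				rerr := sess.Run("exit 0") // the same readiness probe as in the log
    				sess.Close()
    				client.Close()
    				if rerr == nil {
    					return nil
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between retries
    	}
    	return fmt.Errorf("ssh not ready on %s after %d attempts", addr, attempts)
    }

    func main() {
    	// Hypothetical values; the real run uses the DHCP-assigned guest IP and the per-machine id_rsa.
    	if err := waitForSSH("192.168.39.11:22", "docker", "/path/to/id_rsa", 10); err != nil {
    		fmt.Println(err)
    	}
    }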
	I1011 21:17:27.333841   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:17:27.333852   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.336538   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.336885   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.336909   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.337071   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.337262   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337411   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337545   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.337696   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.337866   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.337878   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:17:27.447511   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:17:27.447576   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:17:27.447583   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:17:27.447590   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.447842   29617 buildroot.go:166] provisioning hostname "ha-610874-m02"
	I1011 21:17:27.447866   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.448033   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.450381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450763   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.450793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450924   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.451086   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451309   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451419   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.451547   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.451737   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.451749   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m02 && echo "ha-610874-m02" | sudo tee /etc/hostname
	I1011 21:17:27.572801   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m02
	
	I1011 21:17:27.572834   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.575352   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575751   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.575776   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575941   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.576093   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576220   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576346   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.576461   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.576637   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.576661   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:17:27.695886   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.695916   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:17:27.695938   29617 buildroot.go:174] setting up certificates
	I1011 21:17:27.695952   29617 provision.go:84] configureAuth start
	I1011 21:17:27.695968   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.696239   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:27.698924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699311   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.699342   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699459   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.701614   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.701924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.701942   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.702129   29617 provision.go:143] copyHostCerts
	I1011 21:17:27.702158   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702190   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:17:27.702199   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702263   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:17:27.702355   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702381   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:17:27.702389   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702438   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:17:27.702535   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702560   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:17:27.702567   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702604   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:17:27.702691   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m02 san=[127.0.0.1 192.168.39.11 ha-610874-m02 localhost minikube]
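The provision step above mints a per-node server certificate whose subject alternative names cover the node's own IP, the loopback address and the cluster's hostnames, signed by the shared minikube CA. The sketch below shows the general idea with Go's crypto/x509; it is a simplified illustration, not minikube's cert code: the CA is generated inline instead of being loaded from ca.pem/ca-key.pem, and error handling is omitted for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA (in the real run this is loaded from .minikube/certs/ca.pem / ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert for the new node, using the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
    		DNSNames:     []string{"ha-610874-m02", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }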
	I1011 21:17:27.916455   29617 provision.go:177] copyRemoteCerts
	I1011 21:17:27.916517   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:17:27.916546   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.919220   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919586   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.919612   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919767   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.919931   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.920084   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.920214   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.005137   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:17:28.005206   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:17:28.030798   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:17:28.030868   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:17:28.053929   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:17:28.053992   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:17:28.077344   29617 provision.go:87] duration metric: took 381.381213ms to configureAuth
	I1011 21:17:28.077368   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:17:28.077553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:28.077631   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.079998   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080363   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.080391   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080550   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.080711   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080860   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080957   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.081126   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.081276   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.081289   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:17:28.305072   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:17:28.305099   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:17:28.305107   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetURL
	I1011 21:17:28.306348   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using libvirt version 6000000
	I1011 21:17:28.308766   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309119   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.309148   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309322   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:17:28.309336   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:17:28.309345   29617 client.go:171] duration metric: took 24.190578436s to LocalClient.Create
	I1011 21:17:28.309369   29617 start.go:167] duration metric: took 24.190632715s to libmachine.API.Create "ha-610874"
	I1011 21:17:28.309380   29617 start.go:293] postStartSetup for "ha-610874-m02" (driver="kvm2")
	I1011 21:17:28.309393   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:17:28.309414   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.309649   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:17:28.309678   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.311900   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312234   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.312257   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312366   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.312513   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.312670   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.312813   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.401258   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:17:28.405713   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:17:28.405741   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:17:28.405819   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:17:28.405893   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:17:28.405901   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:17:28.405976   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:17:28.415792   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:28.439288   29617 start.go:296] duration metric: took 129.894011ms for postStartSetup
	I1011 21:17:28.439338   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:28.439884   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.442343   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442733   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.442761   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442929   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:28.443099   29617 start.go:128] duration metric: took 24.341953324s to createHost
	I1011 21:17:28.443119   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.445585   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.445871   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.445894   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.446037   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.446185   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446313   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446509   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.446712   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.446859   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.446869   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:17:28.555655   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681448.532334020
	
	I1011 21:17:28.555684   29617 fix.go:216] guest clock: 1728681448.532334020
	I1011 21:17:28.555698   29617 fix.go:229] Guest: 2024-10-11 21:17:28.53233402 +0000 UTC Remote: 2024-10-11 21:17:28.443109707 +0000 UTC m=+72.164953096 (delta=89.224313ms)
	I1011 21:17:28.555717   29617 fix.go:200] guest clock delta is within tolerance: 89.224313ms
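The fix.go lines above run `date +%s.%N` on the guest and compare the result with the host clock, accepting the new node only if the skew stays within tolerance (here about 89ms). A small hedged sketch of that comparison follows; the tolerance constant is a placeholder, not necessarily minikube's actual threshold, and the sample values are taken directly from the log lines above.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the absolute
    // skew against the reference time, as in the "guest clock" lines above.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	d := guest.Sub(local)
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    func main() {
    	const tolerance = 2 * time.Second // placeholder threshold
    	// Guest and remote timestamps copied from the log entry above.
    	d, err := clockDelta("1728681448.532334020", time.Unix(1728681448, 443109707))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("delta=%v within tolerance: %v\n", d, d <= tolerance)
    }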
	I1011 21:17:28.555723   29617 start.go:83] releasing machines lock for "ha-610874-m02", held for 24.454670186s
	I1011 21:17:28.555747   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.555979   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.558215   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.558576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.558610   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.560996   29617 out.go:177] * Found network options:
	I1011 21:17:28.562345   29617 out.go:177]   - NO_PROXY=192.168.39.10
	W1011 21:17:28.563437   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.563463   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.563914   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564081   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564167   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:17:28.564198   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	W1011 21:17:28.564293   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.564371   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:17:28.564394   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.566543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566887   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.566920   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566948   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567066   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567235   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567341   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.567349   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567359   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567462   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.567515   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567649   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567774   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567889   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.804794   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:17:28.816172   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:17:28.816234   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:17:28.833684   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:17:28.833707   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:17:28.833785   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:17:28.850682   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:17:28.865268   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:17:28.865314   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:17:28.879804   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:17:28.893790   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:17:29.005060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:17:29.161552   29617 docker.go:233] disabling docker service ...
	I1011 21:17:29.161623   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:17:29.176030   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:17:29.188905   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:17:29.314012   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:17:29.444969   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:17:29.458929   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:17:29.477279   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:17:29.477336   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.487485   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:17:29.487557   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.497725   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.508074   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.518078   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:17:29.528405   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.538441   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.555119   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.568308   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:17:29.578239   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:17:29.578297   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:17:29.591777   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:17:29.601766   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:29.733693   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:17:29.832686   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:17:29.832769   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:17:29.837474   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:17:29.837531   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:17:29.841328   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:17:29.885910   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:17:29.885997   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.915959   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.947445   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:17:29.948743   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:17:29.949776   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:29.952438   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952742   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:29.952767   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952926   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:17:29.957045   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:29.969401   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:17:29.969618   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:29.969904   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.969953   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:29.984875   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I1011 21:17:29.985308   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:29.985749   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:29.985772   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:29.986088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:29.986307   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:29.987951   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:29.988270   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.988309   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:30.002903   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I1011 21:17:30.003325   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:30.003771   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:30.003791   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:30.004088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:30.004322   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:30.004478   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.11
	I1011 21:17:30.004490   29617 certs.go:194] generating shared ca certs ...
	I1011 21:17:30.004507   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.004645   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:17:30.004706   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:17:30.004720   29617 certs.go:256] generating profile certs ...
	I1011 21:17:30.004812   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:17:30.004845   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a
	I1011 21:17:30.004865   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.254]
	I1011 21:17:30.068798   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a ...
	I1011 21:17:30.068829   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a: {Name:mk7e577273a37f1215e925a89aaf2057d9d70c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069010   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a ...
	I1011 21:17:30.069026   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a: {Name:mk272cb1eed2069075ccbf59f795f6618abcd353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069135   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:17:30.069298   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:17:30.069453   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:17:30.069470   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:17:30.069497   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:17:30.069514   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:17:30.069533   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:17:30.069553   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:17:30.069571   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:17:30.069589   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:17:30.069614   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:17:30.069674   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:17:30.069714   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:17:30.069727   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:17:30.069761   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:17:30.069795   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:17:30.069830   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:17:30.069888   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:30.069930   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.069950   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.069968   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.070008   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:30.073028   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073411   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:30.073439   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073677   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:30.073887   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:30.074102   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:30.074339   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:30.150977   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:17:30.155841   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:17:30.167973   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:17:30.172398   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:17:30.183178   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:17:30.187494   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:17:30.198396   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:17:30.202690   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:17:30.213924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:17:30.218228   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:17:30.229999   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:17:30.234409   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:17:30.246054   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:17:30.271630   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:17:30.295598   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:17:30.320158   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:17:30.346169   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1011 21:17:30.370669   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 21:17:30.396095   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:17:30.424361   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:17:30.449179   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:17:30.473592   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:17:30.497140   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:17:30.520773   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:17:30.537475   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:17:30.553696   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:17:30.573515   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:17:30.591050   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:17:30.607456   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:17:30.623663   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:17:30.639999   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:17:30.645863   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:17:30.656839   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661661   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661737   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.667927   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:17:30.678586   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:17:30.690465   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695106   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695178   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.700843   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:17:30.711530   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:17:30.722262   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726883   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726930   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.732484   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:17:30.743130   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:17:30.747324   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:17:30.747378   29617 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I1011 21:17:30.747471   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:17:30.747503   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:17:30.747550   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:17:30.764827   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:17:30.764898   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
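The block above is the kube-vip static-pod manifest that gets written to /etc/kubernetes/manifests on the new control-plane node, parameterised with the VIP (192.168.39.254) and the API server port. As a hedged illustration of how such a manifest can be rendered, the toy text/template below templates only the per-cluster fields; it is deliberately trimmed down and is not minikube's actual kube-vip template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down stand-in for the manifest shown above: only the fields that
    // vary per cluster (VIP address, port, image) are templated.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: port
          value: "{{.Port}}"
        - name: address
          value: {{.VIP}}
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	params := struct {
    		VIP   string
    		Port  int
    		Image string
    	}{VIP: "192.168.39.254", Port: 8443, Image: "ghcr.io/kube-vip/kube-vip:v0.8.3"}
    	// In the real flow the rendered manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml on the node.
    	if err := t.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }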
	I1011 21:17:30.764958   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.774946   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:17:30.775004   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.785084   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:17:30.785115   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785173   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785210   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1011 21:17:30.785254   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1011 21:17:30.789999   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:17:30.790028   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:17:31.801070   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.801149   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.806312   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:17:31.806341   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:17:31.977093   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:17:32.035477   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.035590   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.049208   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:17:32.049241   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
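Each binary is fetched through a checksum-verified download (the ?checksum=file:...sha256 query above) before being copied into /var/lib/minikube/binaries. A hedged manual equivalent for one of them, using the same dl.k8s.io release URLs, would be:

    # download the kubelet binary plus its published SHA-256 and verify before installing
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.31.1/kubelet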
	I1011 21:17:32.383282   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:17:32.393090   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1011 21:17:32.409524   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:17:32.426347   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:17:32.443202   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:17:32.447193   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:32.459719   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:32.593682   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:32.611619   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:32.611941   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:32.611988   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:32.626650   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1011 21:17:32.627104   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:32.627665   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:32.627681   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:32.627997   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:32.628209   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:32.628355   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:17:32.628464   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:17:32.628490   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:32.631170   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631565   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:32.631594   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631751   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:32.631931   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:32.632068   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:32.632206   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:32.785858   29617 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:32.785905   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I1011 21:17:54.047983   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (21.262048482s)
	I1011 21:17:54.048020   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:17:54.524404   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m02 minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:17:54.662523   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:17:54.782630   29617 start.go:319] duration metric: took 22.154260063s to joinCluster
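The join above is the standard kubeadm flow for adding a second control-plane member: a join command is minted on the existing control plane, then executed on m02 with --control-plane and its own advertise address. An illustrative outline (token and CA hash are cluster-specific placeholders):

    # on the existing control-plane node: print a join command with a fresh token
    sudo kubeadm token create --print-join-command --ttl=0
    # on the new node: join as an additional control plane
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<ca-hash> \
        --control-plane \
        --apiserver-advertise-address 192.168.39.11
    sudo systemctl enable --now kubelet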
	I1011 21:17:54.782703   29617 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:54.782988   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:54.784979   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:17:54.786144   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:55.109738   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:55.128457   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:55.128804   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:17:55.128882   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:17:55.129129   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m02" to be "Ready" ...
	I1011 21:17:55.129231   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.129241   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.129252   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.129258   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.140234   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:55.629803   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.629830   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.629841   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.629847   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.633275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.129516   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.129541   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.129552   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.129559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.132902   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.629511   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.629534   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.634698   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:17:57.129572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.129597   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.129605   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.129609   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.132668   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:57.133230   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:57.629639   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.629659   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.629667   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.629670   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.632880   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:58.129393   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.129417   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.129441   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.129446   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.132403   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:58.629999   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.630018   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.630026   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.630030   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.633746   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.130079   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.130096   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.130104   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.130108   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.133281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.133973   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:59.629323   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.629347   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.629358   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.629364   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.632796   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.129728   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.129749   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.129758   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.129767   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.133151   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.629977   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.630003   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.630015   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.630021   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.633099   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.130138   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.130160   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.130171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.130182   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.133307   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.134143   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:01.630135   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.630158   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.630171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.630177   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.634516   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:02.129957   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.129977   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.129985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.129990   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.209108   29617 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I1011 21:18:02.630223   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.630241   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.630249   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.630254   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.633360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:03.130145   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.130165   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.130172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.130176   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.134521   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:03.135482   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:03.630325   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.630348   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.630357   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.630363   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.633906   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.129848   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.129869   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.129880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.129885   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.133353   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.630352   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.630378   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.630391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.630395   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.633784   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:05.129622   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.129647   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.129658   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.129664   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.174718   29617 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1011 21:18:05.175206   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:05.629573   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.629601   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.629610   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.629614   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.633377   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.129366   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.129388   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.129396   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.129399   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.132592   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.630152   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.630174   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.630184   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.630190   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.633604   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.130251   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.130280   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.130292   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.130299   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.133640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.629546   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.629568   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.629578   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.629583   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.632932   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.633891   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:08.129786   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.129803   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.129811   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.129815   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.133290   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:08.629506   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.629533   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.633075   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.129541   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.129559   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.129567   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.129572   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.132640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.629665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.629684   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.629692   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.629697   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.632858   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:10.129866   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.129885   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.129893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.129897   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.132615   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:10.133150   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:10.629443   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.629475   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.629489   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.629493   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.632970   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.130002   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.130024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.130032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.130035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.133677   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.629439   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.629465   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.629477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.629482   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.632816   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.130049   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.130071   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.130080   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.130083   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.133179   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.133716   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:12.630085   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.630110   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.630121   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.630127   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.633114   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:13.130226   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.130245   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.130253   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.130258   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.133707   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:13.629976   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.630005   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.630016   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.630022   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.633601   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.129823   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.129846   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.129857   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.129863   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.132927   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.630032   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.630053   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.630062   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.630070   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.633208   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.633750   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:15.129885   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.129909   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.129919   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.129924   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.132958   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.630000   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.630024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.630032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.630035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.632986   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.633633   29617 node_ready.go:49] node "ha-610874-m02" has status "Ready":"True"
	I1011 21:18:15.633647   29617 node_ready.go:38] duration metric: took 20.504503338s for node "ha-610874-m02" to be "Ready" ...
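The loop above simply polls GET /api/v1/nodes/ha-610874-m02 every 500ms until the Ready condition flips to True. A kubectl equivalent with the same 6m budget (assuming the profile's kubeconfig context is named ha-610874) would be:

    kubectl --context ha-610874 wait node/ha-610874-m02 --for=condition=Ready --timeout=6m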
	I1011 21:18:15.633655   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:15.633709   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:15.633718   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.633724   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.633728   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.637582   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.643886   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.643972   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:18:15.643983   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.643993   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.643999   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.646763   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.647514   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.647529   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.647536   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.647539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.649945   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.650586   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.650602   29617 pod_ready.go:82] duration metric: took 6.694777ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650623   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650679   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:18:15.650688   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.650699   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.650707   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.652943   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.653673   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.653687   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.653696   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.653701   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.655886   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.656382   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.656397   29617 pod_ready.go:82] duration metric: took 5.765488ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656405   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656451   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:18:15.656461   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.656471   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.656477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.658729   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.659391   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.659409   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.659419   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.659426   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.661629   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.662114   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.662130   29617 pod_ready.go:82] duration metric: took 5.719352ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662137   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662181   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:18:15.662190   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.662197   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.662201   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.664800   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.665273   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.665286   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.665294   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.665298   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.667272   29617 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1011 21:18:15.667736   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.667757   29617 pod_ready.go:82] duration metric: took 5.613486ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.667773   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.830074   29617 request.go:632] Waited for 162.243136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830160   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830168   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.830178   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.830188   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.833590   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.030666   29617 request.go:632] Waited for 196.378996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030722   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030728   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.030735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.030739   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.033962   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.034580   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.034599   29617 pod_ready.go:82] duration metric: took 366.81416ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.034608   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.230672   29617 request.go:632] Waited for 195.982779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230778   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230790   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.230801   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.230810   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.234030   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.430609   29617 request.go:632] Waited for 195.69013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430701   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430712   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.430735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.433742   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.434219   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.434239   29617 pod_ready.go:82] duration metric: took 399.609699ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.434252   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.630260   29617 request.go:632] Waited for 195.941074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630337   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630342   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.630350   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.630357   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.633657   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.830752   29617 request.go:632] Waited for 196.369395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830804   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830811   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.830820   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.830827   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.833807   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.834437   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.834455   29617 pod_ready.go:82] duration metric: took 400.195609ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.834465   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.030516   29617 request.go:632] Waited for 195.993213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030589   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030595   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.030607   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.030627   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.034122   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.230257   29617 request.go:632] Waited for 195.302255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230322   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230329   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.230337   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.230342   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.233560   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.234217   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.234239   29617 pod_ready.go:82] duration metric: took 399.767293ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.234256   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.430433   29617 request.go:632] Waited for 196.107897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430509   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430515   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.430526   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.430534   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.434262   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.630356   29617 request.go:632] Waited for 195.345057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630426   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630431   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.630439   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.630444   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.633591   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.634036   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.634054   29617 pod_ready.go:82] duration metric: took 399.790817ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.634064   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.830520   29617 request.go:632] Waited for 196.385742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830591   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830596   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.830603   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.830607   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.833974   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.030999   29617 request.go:632] Waited for 196.369359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031062   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031068   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.031075   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.031079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.034522   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.035045   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.035060   29617 pod_ready.go:82] duration metric: took 400.990689ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.035069   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.230101   29617 request.go:632] Waited for 194.964535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230173   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230179   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.230187   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.230191   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.233153   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:18.430174   29617 request.go:632] Waited for 196.304225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430252   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430258   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.430265   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.430271   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.433684   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.434857   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.434876   29617 pod_ready.go:82] duration metric: took 399.800525ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.434886   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.630997   29617 request.go:632] Waited for 196.051862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631067   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631072   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.631079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.631090   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.634569   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.830555   29617 request.go:632] Waited for 195.378028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830645   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830652   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.830659   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.830665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.834017   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.834881   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.834901   29617 pod_ready.go:82] duration metric: took 400.009355ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.834913   29617 pod_ready.go:39] duration metric: took 3.201246724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:18.834925   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:18:18.834977   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:18:18.851851   29617 api_server.go:72] duration metric: took 24.069111498s to wait for apiserver process to appear ...
	I1011 21:18:18.851878   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:18:18.851897   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:18:18.856543   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:18:18.856610   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:18:18.856615   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.856622   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.856626   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.857613   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:18:18.857701   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:18:18.857721   29617 api_server.go:131] duration metric: took 5.836547ms to wait for apiserver health ...
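The healthz wait above is a plain HTTPS GET against the API server, treating a 200 response with body "ok" as healthy before the version endpoint is read. A self-contained sketch of such a probe (the skip-verify TLS setting and fixed timeout are assumptions for brevity, not minikube's exact client configuration):

    // Probe the apiserver healthz endpoint and print the result.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: skip certificate verification for the sketch only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.10:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
    }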
	I1011 21:18:18.857730   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:18:19.030066   29617 request.go:632] Waited for 172.254223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030130   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030136   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.030143   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.030148   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.034696   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.039508   29617 system_pods.go:59] 17 kube-system pods found
	I1011 21:18:19.039540   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.039546   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.039551   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.039557   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.039561   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.039566   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.039570   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.039579   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.039584   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.039592   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.039597   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.039601   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.039606   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.039612   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.039615   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.039619   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.039622   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.039631   29617 system_pods.go:74] duration metric: took 181.896084ms to wait for pod list to return data ...
	I1011 21:18:19.039640   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:18:19.230981   29617 request.go:632] Waited for 191.269571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231051   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231057   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.231064   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.231067   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.235209   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.235407   29617 default_sa.go:45] found service account: "default"
	I1011 21:18:19.235421   29617 default_sa.go:55] duration metric: took 195.775642ms for default service account to be created ...
	I1011 21:18:19.235428   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:18:19.430605   29617 request.go:632] Waited for 195.123077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430704   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430710   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.430718   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.435793   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:18:19.439894   29617 system_pods.go:86] 17 kube-system pods found
	I1011 21:18:19.439921   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.439929   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.439935   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.439942   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.439947   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.439953   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.439959   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.439965   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.439972   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.439980   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.439986   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.439995   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.440002   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.440010   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.440016   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.440020   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.440025   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.440033   29617 system_pods.go:126] duration metric: took 204.599583ms to wait for k8s-apps to be running ...
	I1011 21:18:19.440045   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:18:19.440094   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:19.455815   29617 system_svc.go:56] duration metric: took 15.763998ms WaitForService to wait for kubelet
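The kubelet check succeeds when systemctl is-active --quiet exits with status 0; minikube issues the command over SSH via ssh_runner, but locally the same idea looks like this (the unit argument is simplified to just kubelet here; the logged invocation is shown verbatim above):

    // Sketch: treat a zero exit status from systemctl is-active as "running".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }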
	I1011 21:18:19.455841   29617 kubeadm.go:582] duration metric: took 24.673107672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:18:19.455860   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:18:19.630302   29617 request.go:632] Waited for 174.358774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630357   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630364   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.630372   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.630379   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.634356   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:19.635316   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635343   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635358   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635363   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635371   29617 node_conditions.go:105] duration metric: took 179.50548ms to run NodePressure ...
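The NodePressure step reads each node's capacity from its status; the 17734596Ki storage and 2-CPU figures above come straight from node.Status.Capacity. A small sketch of that read, assuming a *corev1.Node already fetched with client-go (capacitySummary is an illustrative helper, not minikube's function):

    // Package nodeinfo sketches reading node capacity values.
    package nodeinfo

    import corev1 "k8s.io/api/core/v1"

    // capacitySummary returns the ephemeral-storage and CPU capacity strings
    // for a node, formatted the same way they appear in the log above.
    func capacitySummary(node *corev1.Node) (storage, cpu string) {
        s := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        c := node.Status.Capacity[corev1.ResourceCPU]
        return s.String(), c.String()
    }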
	I1011 21:18:19.635384   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:18:19.635415   29617 start.go:255] writing updated cluster config ...
	I1011 21:18:19.637553   29617 out.go:201] 
	I1011 21:18:19.638933   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:19.639018   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.640415   29617 out.go:177] * Starting "ha-610874-m03" control-plane node in "ha-610874" cluster
	I1011 21:18:19.641511   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:18:19.641529   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:18:19.641627   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:18:19.641638   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:18:19.641712   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.641856   29617 start.go:360] acquireMachinesLock for ha-610874-m03: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:18:19.641897   29617 start.go:364] duration metric: took 24.129µs to acquireMachinesLock for "ha-610874-m03"
	I1011 21:18:19.641912   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:19.642000   29617 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1011 21:18:19.643322   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:18:19.643394   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:19.643424   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:19.657905   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I1011 21:18:19.658394   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:19.658868   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:19.658887   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:19.659186   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:19.659360   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:19.659497   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:19.659661   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:18:19.659689   29617 client.go:168] LocalClient.Create starting
	I1011 21:18:19.659716   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:18:19.659744   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659756   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659802   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:18:19.659820   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659830   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659844   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:18:19.659851   29617 main.go:141] libmachine: (ha-610874-m03) Calling .PreCreateCheck
	I1011 21:18:19.659994   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:19.660351   29617 main.go:141] libmachine: Creating machine...
	I1011 21:18:19.660362   29617 main.go:141] libmachine: (ha-610874-m03) Calling .Create
	I1011 21:18:19.660504   29617 main.go:141] libmachine: (ha-610874-m03) Creating KVM machine...
	I1011 21:18:19.661678   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing default KVM network
	I1011 21:18:19.661785   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing private KVM network mk-ha-610874
	I1011 21:18:19.661907   29617 main.go:141] libmachine: (ha-610874-m03) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.661930   29617 main.go:141] libmachine: (ha-610874-m03) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:18:19.662023   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.661913   30793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.662086   29617 main.go:141] libmachine: (ha-610874-m03) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:18:19.893907   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.893764   30793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa...
	I1011 21:18:19.985249   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985139   30793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk...
	I1011 21:18:19.985285   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing magic tar header
	I1011 21:18:19.985300   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing SSH key tar header
	I1011 21:18:19.985311   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985257   30793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.985329   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03
	I1011 21:18:19.985350   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 (perms=drwx------)
	I1011 21:18:19.985373   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:18:19.985396   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.985411   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:18:19.985426   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:18:19.985434   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:18:19.985440   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:18:19.985456   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:18:19.985468   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:18:19.985478   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:18:19.985499   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:18:19.985509   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:19.985516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home
	I1011 21:18:19.985526   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Skipping /home - not owner
	I1011 21:18:19.986460   29617 main.go:141] libmachine: (ha-610874-m03) define libvirt domain using xml: 
	I1011 21:18:19.986487   29617 main.go:141] libmachine: (ha-610874-m03) <domain type='kvm'>
	I1011 21:18:19.986497   29617 main.go:141] libmachine: (ha-610874-m03)   <name>ha-610874-m03</name>
	I1011 21:18:19.986505   29617 main.go:141] libmachine: (ha-610874-m03)   <memory unit='MiB'>2200</memory>
	I1011 21:18:19.986513   29617 main.go:141] libmachine: (ha-610874-m03)   <vcpu>2</vcpu>
	I1011 21:18:19.986528   29617 main.go:141] libmachine: (ha-610874-m03)   <features>
	I1011 21:18:19.986539   29617 main.go:141] libmachine: (ha-610874-m03)     <acpi/>
	I1011 21:18:19.986547   29617 main.go:141] libmachine: (ha-610874-m03)     <apic/>
	I1011 21:18:19.986559   29617 main.go:141] libmachine: (ha-610874-m03)     <pae/>
	I1011 21:18:19.986567   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.986578   29617 main.go:141] libmachine: (ha-610874-m03)   </features>
	I1011 21:18:19.986587   29617 main.go:141] libmachine: (ha-610874-m03)   <cpu mode='host-passthrough'>
	I1011 21:18:19.986598   29617 main.go:141] libmachine: (ha-610874-m03)   
	I1011 21:18:19.986605   29617 main.go:141] libmachine: (ha-610874-m03)   </cpu>
	I1011 21:18:19.986657   29617 main.go:141] libmachine: (ha-610874-m03)   <os>
	I1011 21:18:19.986683   29617 main.go:141] libmachine: (ha-610874-m03)     <type>hvm</type>
	I1011 21:18:19.986694   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='cdrom'/>
	I1011 21:18:19.986706   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='hd'/>
	I1011 21:18:19.986714   29617 main.go:141] libmachine: (ha-610874-m03)     <bootmenu enable='no'/>
	I1011 21:18:19.986723   29617 main.go:141] libmachine: (ha-610874-m03)   </os>
	I1011 21:18:19.986733   29617 main.go:141] libmachine: (ha-610874-m03)   <devices>
	I1011 21:18:19.986743   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='cdrom'>
	I1011 21:18:19.986759   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/boot2docker.iso'/>
	I1011 21:18:19.986773   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hdc' bus='scsi'/>
	I1011 21:18:19.986784   29617 main.go:141] libmachine: (ha-610874-m03)       <readonly/>
	I1011 21:18:19.986793   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986804   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='disk'>
	I1011 21:18:19.986816   29617 main.go:141] libmachine: (ha-610874-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:18:19.986831   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk'/>
	I1011 21:18:19.986840   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hda' bus='virtio'/>
	I1011 21:18:19.986871   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986898   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986911   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='mk-ha-610874'/>
	I1011 21:18:19.986922   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986933   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986941   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986948   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='default'/>
	I1011 21:18:19.986962   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986972   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986987   29617 main.go:141] libmachine: (ha-610874-m03)     <serial type='pty'>
	I1011 21:18:19.986999   29617 main.go:141] libmachine: (ha-610874-m03)       <target port='0'/>
	I1011 21:18:19.987006   29617 main.go:141] libmachine: (ha-610874-m03)     </serial>
	I1011 21:18:19.987015   29617 main.go:141] libmachine: (ha-610874-m03)     <console type='pty'>
	I1011 21:18:19.987025   29617 main.go:141] libmachine: (ha-610874-m03)       <target type='serial' port='0'/>
	I1011 21:18:19.987033   29617 main.go:141] libmachine: (ha-610874-m03)     </console>
	I1011 21:18:19.987052   29617 main.go:141] libmachine: (ha-610874-m03)     <rng model='virtio'>
	I1011 21:18:19.987060   29617 main.go:141] libmachine: (ha-610874-m03)       <backend model='random'>/dev/random</backend>
	I1011 21:18:19.987068   29617 main.go:141] libmachine: (ha-610874-m03)     </rng>
	I1011 21:18:19.987076   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987087   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987095   29617 main.go:141] libmachine: (ha-610874-m03)   </devices>
	I1011 21:18:19.987107   29617 main.go:141] libmachine: (ha-610874-m03) </domain>
	I1011 21:18:19.987120   29617 main.go:141] libmachine: (ha-610874-m03) 
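The domain definition above is emitted before the VM is defined; the kvm2 driver renders it from Go data. A condensed sketch of rendering such a definition with text/template (the trimmed template and the domainConfig fields are illustrative; the real definition carries the full device list shown above):

    // Render a minimal libvirt <domain> definition from a template.
    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.Memory}}</memory>
      <vcpu>{{.CPU}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
    </domain>
    `

    type domainConfig struct {
        Name   string
        Memory int
        CPU    int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values match the VM created above: 2200 MiB of RAM and 2 vCPUs.
        _ = t.Execute(os.Stdout, domainConfig{Name: "ha-610874-m03", Memory: 2200, CPU: 2})
    }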
	I1011 21:18:19.993869   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:ec:a1:8a in network default
	I1011 21:18:19.994634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:19.994661   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring networks are active...
	I1011 21:18:19.995468   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network default is active
	I1011 21:18:19.995798   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network mk-ha-610874 is active
	I1011 21:18:19.996173   29617 main.go:141] libmachine: (ha-610874-m03) Getting domain xml...
	I1011 21:18:19.996928   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:21.254226   29617 main.go:141] libmachine: (ha-610874-m03) Waiting to get IP...
	I1011 21:18:21.254939   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.255287   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.255333   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.255277   30793 retry.go:31] will retry after 299.921958ms: waiting for machine to come up
	I1011 21:18:21.557116   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.557606   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.557634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.557554   30793 retry.go:31] will retry after 286.000289ms: waiting for machine to come up
	I1011 21:18:21.844948   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.845467   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.845490   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.845417   30793 retry.go:31] will retry after 387.119662ms: waiting for machine to come up
	I1011 21:18:22.233861   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.234347   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.234371   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.234316   30793 retry.go:31] will retry after 432.218769ms: waiting for machine to come up
	I1011 21:18:22.667570   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.668013   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.668044   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.667960   30793 retry.go:31] will retry after 681.692732ms: waiting for machine to come up
	I1011 21:18:23.350671   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:23.351087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:23.351114   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:23.351059   30793 retry.go:31] will retry after 838.189989ms: waiting for machine to come up
	I1011 21:18:24.191008   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:24.191479   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:24.191510   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:24.191434   30793 retry.go:31] will retry after 815.751815ms: waiting for machine to come up
	I1011 21:18:25.008738   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:25.009063   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:25.009087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:25.009033   30793 retry.go:31] will retry after 1.238801147s: waiting for machine to come up
	I1011 21:18:26.249732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:26.250130   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:26.250160   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:26.250077   30793 retry.go:31] will retry after 1.384996284s: waiting for machine to come up
	I1011 21:18:27.636107   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:27.636581   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:27.636616   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:27.636560   30793 retry.go:31] will retry after 2.228451179s: waiting for machine to come up
	I1011 21:18:29.866214   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:29.866564   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:29.866592   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:29.866517   30793 retry.go:31] will retry after 2.670642081s: waiting for machine to come up
	I1011 21:18:32.539631   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:32.539928   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:32.539955   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:32.539912   30793 retry.go:31] will retry after 2.348031686s: waiting for machine to come up
	I1011 21:18:34.889816   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:34.890238   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:34.890284   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:34.890163   30793 retry.go:31] will retry after 4.066011924s: waiting for machine to come up
	I1011 21:18:38.960327   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:38.960729   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:38.960754   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:38.960678   30793 retry.go:31] will retry after 5.543915191s: waiting for machine to come up
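The "will retry after ..." lines show a retry loop whose delay grows (with jitter) on each failed DHCP-lease lookup until the address finally appears a few lines below. A minimal sketch of that pattern; waitForIP and the attempt cap are illustrative stand-ins, not minikube's retry package:

    // Retry an IP lookup with a growing, jittered delay.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP is a stand-in for the driver's lookup of the domain's
    // current IP address; it fails until the lease "appears".
    func waitForIP(attempt int) (string, error) {
        if attempt < 14 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.222", nil
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 15; attempt++ {
            ip, err := waitForIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Grow the delay and add jitter, mirroring the increasing
            // "will retry after" intervals in the log above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        fmt.Println("gave up waiting for an IP address")
    }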
	I1011 21:18:44.509752   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510179   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has current primary IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510202   29617 main.go:141] libmachine: (ha-610874-m03) Found IP for machine: 192.168.39.222
	I1011 21:18:44.510223   29617 main.go:141] libmachine: (ha-610874-m03) Reserving static IP address...
	I1011 21:18:44.510657   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find host DHCP lease matching {name: "ha-610874-m03", mac: "52:54:00:54:11:ff", ip: "192.168.39.222"} in network mk-ha-610874
	I1011 21:18:44.581123   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Getting to WaitForSSH function...
	I1011 21:18:44.581152   29617 main.go:141] libmachine: (ha-610874-m03) Reserved static IP address: 192.168.39.222
	I1011 21:18:44.581189   29617 main.go:141] libmachine: (ha-610874-m03) Waiting for SSH to be available...
	I1011 21:18:44.584495   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585006   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.585034   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585216   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH client type: external
	I1011 21:18:44.585245   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa (-rw-------)
	I1011 21:18:44.585269   29617 main.go:141] libmachine: (ha-610874-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:18:44.585288   29617 main.go:141] libmachine: (ha-610874-m03) DBG | About to run SSH command:
	I1011 21:18:44.585303   29617 main.go:141] libmachine: (ha-610874-m03) DBG | exit 0
	I1011 21:18:44.714704   29617 main.go:141] libmachine: (ha-610874-m03) DBG | SSH cmd err, output: <nil>: 
	I1011 21:18:44.714970   29617 main.go:141] libmachine: (ha-610874-m03) KVM machine creation complete!
	I1011 21:18:44.715289   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:44.715822   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.715996   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.716157   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:18:44.716172   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetState
	I1011 21:18:44.717356   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:18:44.717371   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:18:44.717376   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:18:44.717382   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.719703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.719994   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.720030   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.720182   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.720357   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720507   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.720910   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.721104   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.721116   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:18:44.833939   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:44.833957   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:18:44.833964   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.836658   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837043   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.837069   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837281   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.837454   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837581   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837720   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.837855   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.838048   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.838063   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:18:44.951348   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:18:44.951417   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:18:44.951426   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:18:44.951433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951662   29617 buildroot.go:166] provisioning hostname "ha-610874-m03"
	I1011 21:18:44.951688   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951865   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.954732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955115   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.955139   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955310   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.955477   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955769   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.955914   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.956070   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.956081   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m03 && echo "ha-610874-m03" | sudo tee /etc/hostname
	I1011 21:18:45.085832   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m03
	
	I1011 21:18:45.085866   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.088705   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089140   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.089165   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089355   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.089596   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089767   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089921   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.090058   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.090210   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.090224   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:18:45.213456   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:45.213485   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:18:45.213503   29617 buildroot.go:174] setting up certificates
	I1011 21:18:45.213511   29617 provision.go:84] configureAuth start
	I1011 21:18:45.213520   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:45.213850   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.216516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.216909   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.216945   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.217058   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.219374   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219692   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.219725   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219870   29617 provision.go:143] copyHostCerts
	I1011 21:18:45.219895   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.219927   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:18:45.219936   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.220002   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:18:45.220073   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220091   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:18:45.220098   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220120   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:18:45.220162   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220179   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:18:45.220186   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220212   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:18:45.220261   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m03 san=[127.0.0.1 192.168.39.222 ha-610874-m03 localhost minikube]
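provision.go above mints a server certificate for the new node with the SANs listed (127.0.0.1, the node IP, the hostname, localhost, minikube), signed by the minikube CA and valid for the configured 26280h. A compact sketch of issuing a SAN-bearing certificate with crypto/x509; the self-signed shortcut and the 2048-bit key size are assumptions for brevity, whereas the real code signs with the ca.pem/ca-key.pem pair:

    // Generate a server certificate carrying IP and DNS SANs.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-610874-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
        }
        // Self-signed here for brevity; the provisioner uses the CA as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }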
	I1011 21:18:45.381567   29617 provision.go:177] copyRemoteCerts
	I1011 21:18:45.381648   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:18:45.381676   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.384744   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385058   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.385090   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385241   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.385433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.385594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.385733   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.474156   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:18:45.474223   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:18:45.499839   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:18:45.499913   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:18:45.523935   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:18:45.524000   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:18:45.546732   29617 provision.go:87] duration metric: took 333.208457ms to configureAuth
	I1011 21:18:45.546761   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:18:45.546986   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:45.547077   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.549423   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549746   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.549774   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549963   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.550145   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550309   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550436   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.550559   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.550750   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.550765   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:18:45.793129   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:18:45.793158   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:18:45.793166   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetURL
	I1011 21:18:45.794426   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using libvirt version 6000000
	I1011 21:18:45.796703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797072   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.797104   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797300   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:18:45.797313   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:18:45.797320   29617 client.go:171] duration metric: took 26.137622442s to LocalClient.Create
	I1011 21:18:45.797348   29617 start.go:167] duration metric: took 26.137680612s to libmachine.API.Create "ha-610874"
	I1011 21:18:45.797358   29617 start.go:293] postStartSetup for "ha-610874-m03" (driver="kvm2")
	I1011 21:18:45.797373   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:18:45.797391   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:45.797597   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:18:45.797632   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.799512   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799830   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.799859   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799989   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.800143   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.800296   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.800459   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.889596   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:18:45.893814   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:18:45.893840   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:18:45.893920   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:18:45.893992   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:18:45.894000   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:18:45.894078   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:18:45.903909   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:45.928066   29617 start.go:296] duration metric: took 130.695494ms for postStartSetup
	I1011 21:18:45.928125   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:45.928694   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.931370   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.931736   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.931757   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.932008   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:45.932227   29617 start.go:128] duration metric: took 26.290217466s to createHost
	I1011 21:18:45.932255   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.934599   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.934957   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.934980   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.935141   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.935302   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935450   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.935755   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.935906   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.935915   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:18:46.051363   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681526.030608830
	
	I1011 21:18:46.051382   29617 fix.go:216] guest clock: 1728681526.030608830
	I1011 21:18:46.051389   29617 fix.go:229] Guest: 2024-10-11 21:18:46.03060883 +0000 UTC Remote: 2024-10-11 21:18:45.932240932 +0000 UTC m=+149.654084325 (delta=98.367898ms)
	I1011 21:18:46.051403   29617 fix.go:200] guest clock delta is within tolerance: 98.367898ms
	I1011 21:18:46.051408   29617 start.go:83] releasing machines lock for "ha-610874-m03", held for 26.409503393s
	I1011 21:18:46.051425   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.051638   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:46.054103   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.054465   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.054484   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.056759   29617 out.go:177] * Found network options:
	I1011 21:18:46.058108   29617 out.go:177]   - NO_PROXY=192.168.39.10,192.168.39.11
	W1011 21:18:46.059377   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.059397   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.059412   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.059861   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060012   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060103   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:18:46.060140   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	W1011 21:18:46.060197   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.060218   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.060273   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:18:46.060291   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:46.062781   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063134   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063156   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063177   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063332   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063533   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.063672   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.063695   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063722   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063809   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063917   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.063937   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.064070   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.064193   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.315238   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:18:46.321537   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:18:46.321622   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:18:46.338777   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:18:46.338801   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:18:46.338861   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:18:46.354279   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:18:46.367905   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:18:46.367951   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:18:46.382395   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:18:46.395784   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:18:46.527698   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:18:46.689393   29617 docker.go:233] disabling docker service ...
	I1011 21:18:46.689462   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:18:46.704203   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:18:46.717422   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:18:46.835539   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:18:46.954100   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:18:46.969007   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:18:46.988391   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:18:46.988466   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:46.998736   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:18:46.998798   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.011000   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.020896   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.032139   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:18:47.042674   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.053148   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.070001   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.079898   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:18:47.089404   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:18:47.089464   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:18:47.101955   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:18:47.111372   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:47.225475   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
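The run above tweaks CRI-O in place before restarting it: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls, all via sed over /etc/crio/crio.conf.d/02-crio.conf. Below is a minimal Go sketch of the same edits, assuming local root access instead of minikube's ssh_runner; the command strings are copied from the log and error handling is simplified (the real run also first ensures a default_sysctls block exists).

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command under sudo and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		// Pin the pause image kubeadm expects for Kubernetes v1.31.
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// Match the kubelet's cgroupfs cgroup driver.
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// Let pods bind low ports without extra privileges (assumes default_sysctls already exists).
		`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		// Apply the new configuration.
		`systemctl daemon-reload`,
		`systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}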
	I1011 21:18:47.314226   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:18:47.314298   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:18:47.318974   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:18:47.319034   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:18:47.322683   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:18:47.363256   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:18:47.363346   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.390105   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.420312   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:18:47.421976   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:18:47.423450   29617 out.go:177]   - env NO_PROXY=192.168.39.10,192.168.39.11
	I1011 21:18:47.424609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:47.427015   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427408   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:47.427435   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427580   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:18:47.432290   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:47.445118   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:18:47.445341   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:47.445588   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.445623   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.460772   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1011 21:18:47.461253   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.461758   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.461778   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.462071   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.462258   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:18:47.463800   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:47.464063   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.464094   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.478835   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1011 21:18:47.479190   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.479632   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.479653   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.479922   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.480090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:47.480267   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.222
	I1011 21:18:47.480276   29617 certs.go:194] generating shared ca certs ...
	I1011 21:18:47.480289   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.480440   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:18:47.480492   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:18:47.480504   29617 certs.go:256] generating profile certs ...
	I1011 21:18:47.480599   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:18:47.480632   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda
	I1011 21:18:47.480651   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:18:47.766344   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda ...
	I1011 21:18:47.766372   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda: {Name:mk781938e611c805d4d3614e2a3753b43a334879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766558   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda ...
	I1011 21:18:47.766576   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda: {Name:mk730a6176bc0314778375ee5435bf733e13e8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766701   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:18:47.766854   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
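The new apiserver serving certificate generated above carries every address a client may use to reach the control plane: the in-cluster service IP 10.96.0.1, loopback, both existing control-plane IPs, the new node's 192.168.39.222, and the kube-vip VIP 192.168.39.254. A minimal Go sketch of a certificate template with that SAN set follows; it is self-signed for brevity, whereas the real run signs the cert with the minikubeCA key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SAN list the log reports for apiserver.crt.559e7cda.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.10"), net.ParseIP("192.168.39.11"),
			net.ParseIP("192.168.39.222"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}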
	I1011 21:18:47.767020   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:18:47.767039   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:18:47.767069   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:18:47.767088   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:18:47.767105   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:18:47.767122   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:18:47.767138   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:18:47.767155   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:18:47.790727   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:18:47.790840   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:18:47.790890   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:18:47.790900   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:18:47.790934   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:18:47.790968   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:18:47.791002   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:18:47.791046   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:47.791074   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:18:47.791090   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:47.791103   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:18:47.791139   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:47.794048   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794490   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:47.794521   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794666   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:47.794865   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:47.795021   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:47.795166   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:47.874924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:18:47.879896   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:18:47.890508   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:18:47.894884   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:18:47.906444   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:18:47.911071   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:18:47.924640   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:18:47.929130   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:18:47.939543   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:18:47.943420   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:18:47.952418   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:18:47.956156   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:18:47.965542   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:18:47.990672   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:18:48.018655   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:18:48.046638   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:18:48.075087   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1011 21:18:48.099261   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1011 21:18:48.125316   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:18:48.150810   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:18:48.176240   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:18:48.202437   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:18:48.228304   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:18:48.250733   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:18:48.267330   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:18:48.284282   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:18:48.300414   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:18:48.317312   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:18:48.334266   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:18:48.350540   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:18:48.366454   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:18:48.371903   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:18:48.382259   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386521   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386558   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.392096   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:18:48.402476   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:18:48.414951   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420157   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420212   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.426147   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:18:48.437228   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:18:48.447706   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452447   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452490   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.457944   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:18:48.469558   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:18:48.473684   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:18:48.473727   29617 kubeadm.go:934] updating node {m03 192.168.39.222 8443 v1.31.1 crio true true} ...
	I1011 21:18:48.473800   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:18:48.473821   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:18:48.473848   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:18:48.489435   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:18:48.489512   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
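The manifest above is the static pod that lets every control-plane node contend for the 192.168.39.254 VIP; a few lines later it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it directly. A small Go sketch, assuming the sigs.k8s.io/yaml and k8s.io/api modules are available, that loads that file and prints the VIP-related settings as a sanity check:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	if len(pod.Spec.Containers) == 0 {
		panic("manifest has no containers")
	}
	// The VIP address and load-balancer behaviour are all driven by env vars
	// on the single kube-vip container.
	for _, env := range pod.Spec.Containers[0].Env {
		switch env.Name {
		case "address", "vip_interface", "lb_enable", "lb_port":
			fmt.Printf("%s=%s\n", env.Name, env.Value)
		}
	}
}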
	I1011 21:18:48.489571   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.499111   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:18:48.499166   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:18:48.509200   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509211   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1011 21:18:48.509233   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509250   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509288   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509215   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:48.517849   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:18:48.517877   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:18:48.530466   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.530534   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:18:48.530551   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:18:48.530575   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.584347   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:18:48.584388   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
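Because /var/lib/minikube/binaries/v1.31.1 was empty on the new node, kubelet, kubeadm and kubectl are streamed over from the local cache, which itself was populated from dl.k8s.io with a .sha256 checksum alongside each download. A rough Go sketch of that verify-then-install step for one binary (URL taken from the log; the output path and file mode are illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory, failing on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified and written")
}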
	I1011 21:18:49.359545   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:18:49.369067   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 21:18:49.386375   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:18:49.402697   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:18:49.419546   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:18:49.424269   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:49.437035   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:49.561710   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:18:49.579907   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:49.580262   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:49.580306   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:49.596329   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1011 21:18:49.596782   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:49.597244   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:49.597267   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:49.597574   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:49.597761   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:49.597902   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:18:49.598045   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:18:49.598061   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:49.601098   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601584   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:49.601613   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601735   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:49.601902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:49.602044   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:49.602182   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:49.765636   29617 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:49.765692   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I1011 21:19:12.027662   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (22.261919257s)
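Joining m03 as a third control-plane node reuses the output of kubeadm token create --print-join-command from the primary and appends the control-plane-specific flags seen above. A small Go helper sketch that assembles the same invocation (the token and CA hash are replaced with placeholders, not the values from this run):

package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoin appends the flags minikube adds for an additional
// control-plane member to the base join command printed by the primary.
func controlPlaneJoin(printJoinCommand, nodeName, advertiseIP string) string {
	base := strings.TrimSpace(printJoinCommand)
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return base + " " + strings.Join(extra, " ")
}

func main() {
	cmd := controlPlaneJoin(
		"kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>",
		"ha-610874-m03", "192.168.39.222")
	fmt.Println(cmd)
}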
	I1011 21:19:12.027723   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:19:12.601287   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m03 minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:19:12.730357   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:19:12.852046   29617 start.go:319] duration metric: took 23.254138834s to joinCluster
	I1011 21:19:12.852173   29617 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:19:12.852553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:19:12.853928   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:19:12.855524   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:19:13.141318   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:19:13.175499   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:19:13.175739   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:19:13.175813   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:19:13.176040   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:13.176203   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.176216   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.176230   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.176236   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.180062   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:13.676530   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.676550   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.676559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.676563   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.680629   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.176763   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.176790   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.176802   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.176813   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.181595   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.676942   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.676962   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.676971   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.676974   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.680092   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.177198   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.177232   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.177243   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.177251   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.181013   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.181507   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:15.676949   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.676975   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.676985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.676991   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.680404   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.176381   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.176401   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.176411   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.176416   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.179611   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.676230   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.676253   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.676264   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.676269   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.679007   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.176965   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.176991   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.177003   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.177010   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.179578   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.677212   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.677239   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.677250   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.677257   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.680848   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:17.681529   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:18.176617   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.176642   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.176652   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.176657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.180501   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:18.676324   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.676344   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.676352   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.676356   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.680172   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:19.176785   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.176805   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.176813   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.176817   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.180917   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:19.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.676229   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.676239   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.676247   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.679537   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:20.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.176578   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.176586   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.176590   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.180852   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:20.181655   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:20.676981   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.677001   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.677010   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.677013   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.680773   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.176665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.176687   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.176695   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.176698   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.180326   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.677105   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.677131   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.677143   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.677150   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.680523   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:22.176275   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.176296   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.176305   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.176311   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.180665   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:22.181892   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:22.677209   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.677234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.677254   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.677260   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.680867   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.177040   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.177059   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.177067   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.177072   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.180354   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.676494   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.676523   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.676533   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.676539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.679890   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.177143   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.177165   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.177172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.177178   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.181118   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.182010   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:24.677149   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.677167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.677176   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.677179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.681310   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.176839   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.176861   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.176869   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.176875   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.181361   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.676226   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.676235   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.676238   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.679734   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.176896   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.176927   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.176938   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.176942   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.180665   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.676529   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.676556   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.676567   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.676574   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.679852   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.680538   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:27.176980   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.177000   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.177008   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.177011   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.180641   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:27.676837   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.676865   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.676876   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.676883   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.680097   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.177112   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.177134   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.177145   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.177152   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.180461   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.676318   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.676339   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.676347   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.676351   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.680275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.680843   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:29.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.176576   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.176584   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.176589   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.180006   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:29.676572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.676591   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.676601   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.676608   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.679885   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.176623   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.176647   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.176655   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.176660   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.180360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.676414   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.676442   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.676454   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.676462   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.679795   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.176596   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.176622   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.176632   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.176638   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.180174   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.180775   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:31.676625   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.676645   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.676653   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.676657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.679755   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.176832   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.176853   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.176861   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.176866   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.180709   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.676943   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.676966   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.676975   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.676979   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.680453   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.176289   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.176309   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.176317   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.176323   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179239   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:33.179746   29617 node_ready.go:49] node "ha-610874-m03" has status "Ready":"True"
	I1011 21:19:33.179763   29617 node_ready.go:38] duration metric: took 20.003708199s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:33.179771   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:33.179838   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:33.179846   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.179852   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179856   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.189958   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.199406   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.199502   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:19:33.199514   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.199523   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.199531   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.209887   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.210687   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.210702   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.210713   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.210717   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.217280   29617 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1011 21:19:33.217765   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.217784   29617 pod_ready.go:82] duration metric: took 18.353705ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217795   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217867   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:19:33.217877   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.217887   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.217892   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.223080   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:33.223812   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.223824   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.223831   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.223835   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.230872   29617 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1011 21:19:33.231311   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.231329   29617 pod_ready.go:82] duration metric: took 13.526998ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231340   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231407   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:19:33.231416   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.231425   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.231433   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.241511   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.242134   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.242152   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.242161   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.242167   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.246996   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.247556   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.247579   29617 pod_ready.go:82] duration metric: took 16.22432ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247588   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247649   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:19:33.247658   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.247665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.247671   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.251040   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.251793   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:33.251812   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.251824   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.251833   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.256535   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.256972   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.256988   29617 pod_ready.go:82] duration metric: took 9.394627ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.256997   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.377135   29617 request.go:632] Waited for 120.080186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377222   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.377244   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.377255   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.380444   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.576460   29617 request.go:632] Waited for 195.298391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576523   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576531   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.576540   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.576546   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.579942   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.580389   29617 pod_ready.go:93] pod "etcd-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.580410   29617 pod_ready.go:82] duration metric: took 323.407782ms for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.580426   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.776719   29617 request.go:632] Waited for 196.227093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776796   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776801   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.776812   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.776819   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.780183   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.977331   29617 request.go:632] Waited for 196.373167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977390   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.977408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.977414   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.980667   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.981324   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.981341   29617 pod_ready.go:82] duration metric: took 400.908426ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.981356   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.176801   29617 request.go:632] Waited for 195.389419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176872   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176878   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.176886   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.176893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.180626   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.376945   29617 request.go:632] Waited for 195.362412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377024   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377032   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.377039   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.377045   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.380705   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.381593   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.381610   29617 pod_ready.go:82] duration metric: took 400.248016ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.381621   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.576685   29617 request.go:632] Waited for 195.00587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576774   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576785   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.576796   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.576812   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.580220   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.776845   29617 request.go:632] Waited for 195.742935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776934   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776946   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.776957   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.776965   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.781975   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:34.782910   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.782934   29617 pod_ready.go:82] duration metric: took 401.305343ms for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.782947   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.976878   29617 request.go:632] Waited for 193.849735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976930   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976935   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.976942   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.976951   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.980959   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.176307   29617 request.go:632] Waited for 194.592291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176377   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176382   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.176391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.176396   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.180046   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.180744   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.180763   29617 pod_ready.go:82] duration metric: took 397.808243ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.180772   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.376823   29617 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376892   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376904   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.376914   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.376920   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.380896   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.577025   29617 request.go:632] Waited for 195.339459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577098   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577106   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.577113   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.577121   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.580479   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.581020   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.581044   29617 pod_ready.go:82] duration metric: took 400.264515ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.581060   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.777008   29617 request.go:632] Waited for 195.878722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777069   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777082   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.777104   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.777112   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.780597   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.976851   29617 request.go:632] Waited for 195.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976920   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976925   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.976934   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.976956   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.980563   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.981007   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.981026   29617 pod_ready.go:82] duration metric: took 399.955573ms for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.981036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.177077   29617 request.go:632] Waited for 195.967969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177157   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177162   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.177169   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.177174   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.181463   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:36.376692   29617 request.go:632] Waited for 194.268817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376745   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376750   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.376757   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.376762   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.379384   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.379856   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.379878   29617 pod_ready.go:82] duration metric: took 398.835564ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.379892   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.577313   29617 request.go:632] Waited for 197.342873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577431   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577448   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.577456   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.577460   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.580412   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.776616   29617 request.go:632] Waited for 195.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776706   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776717   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.776728   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.776737   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.779960   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:36.780383   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.780400   29617 pod_ready.go:82] duration metric: took 400.499984ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.780412   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.976358   29617 request.go:632] Waited for 195.870601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976432   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976449   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.976465   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.976472   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.979995   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.177111   29617 request.go:632] Waited for 196.357808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177162   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.177174   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.177179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.180267   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.181011   29617 pod_ready.go:93] pod "kube-proxy-cwzw4" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.181027   29617 pod_ready.go:82] duration metric: took 400.605186ms for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.181036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.377210   29617 request.go:632] Waited for 196.081343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377264   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377271   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.377281   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.377290   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.380963   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.577326   29617 request.go:632] Waited for 195.76133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577389   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.577404   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.577408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.580712   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.581178   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.581195   29617 pod_ready.go:82] duration metric: took 400.154079ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.581207   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.776648   29617 request.go:632] Waited for 195.355762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776752   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776766   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.776778   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.776782   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.779689   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:37.976673   29617 request.go:632] Waited for 196.375961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976747   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976758   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.976880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.976898   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.980426   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.981073   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.981096   29617 pod_ready.go:82] duration metric: took 399.882141ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.981108   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.177223   29617 request.go:632] Waited for 196.014293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177283   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177288   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.177296   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.177301   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.181281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.376306   29617 request.go:632] Waited for 194.28038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376394   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376403   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.376412   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.376419   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.379547   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.380029   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:38.380048   29617 pod_ready.go:82] duration metric: took 398.929633ms for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.380058   29617 pod_ready.go:39] duration metric: took 5.200277623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:38.380084   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:19:38.380134   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:19:38.400400   29617 api_server.go:72] duration metric: took 25.548169639s to wait for apiserver process to appear ...
	I1011 21:19:38.400421   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:19:38.400455   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:19:38.404896   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:19:38.404960   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:19:38.404973   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.404983   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.404989   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.405751   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:19:38.405814   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:19:38.405829   29617 api_server.go:131] duration metric: took 5.403218ms to wait for apiserver health ...
	I1011 21:19:38.405839   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:19:38.577234   29617 request.go:632] Waited for 171.320057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577302   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577307   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.577315   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.577319   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.583229   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.592399   29617 system_pods.go:59] 24 kube-system pods found
	I1011 21:19:38.592431   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.592436   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.592439   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.592442   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.592445   29617 system_pods.go:61] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.592448   29617 system_pods.go:61] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.592452   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.592455   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.592458   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.592461   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.592465   29617 system_pods.go:61] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.592468   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.592474   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.592477   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.592480   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.592484   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.592488   29617 system_pods.go:61] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.592493   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.592496   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.592499   29617 system_pods.go:61] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.592502   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.592511   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.592517   29617 system_pods.go:61] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.592521   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.592525   29617 system_pods.go:74] duration metric: took 186.682269ms to wait for pod list to return data ...
	I1011 21:19:38.592532   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:19:38.776788   29617 request.go:632] Waited for 184.17903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776850   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776857   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.776867   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.776874   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.780634   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.780764   29617 default_sa.go:45] found service account: "default"
	I1011 21:19:38.780782   29617 default_sa.go:55] duration metric: took 188.241369ms for default service account to be created ...
	I1011 21:19:38.780791   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:19:38.977229   29617 request.go:632] Waited for 196.374035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977314   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977326   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.977333   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.977339   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.983305   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.990701   29617 system_pods.go:86] 24 kube-system pods found
	I1011 21:19:38.990734   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.990743   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.990750   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.990756   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.990762   29617 system_pods.go:89] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.990769   29617 system_pods.go:89] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.990775   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.990782   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.990790   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.990800   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.990808   29617 system_pods.go:89] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.990818   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.990826   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.990835   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.990842   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.990849   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.990856   29617 system_pods.go:89] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.990866   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.990873   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.990880   29617 system_pods.go:89] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.990889   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.990896   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.990903   29617 system_pods.go:89] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.990910   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.990922   29617 system_pods.go:126] duration metric: took 210.12433ms to wait for k8s-apps to be running ...
	I1011 21:19:38.990936   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:19:38.991000   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:19:39.006368   29617 system_svc.go:56] duration metric: took 15.405995ms WaitForService to wait for kubelet
	I1011 21:19:39.006398   29617 kubeadm.go:582] duration metric: took 26.154169399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:19:39.006432   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:19:39.177139   29617 request.go:632] Waited for 170.58768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177204   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177210   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:39.177218   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:39.177226   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:39.180762   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:39.182158   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182186   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182210   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182214   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182219   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182222   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182225   29617 node_conditions.go:105] duration metric: took 175.788668ms to run NodePressure ...
	I1011 21:19:39.182235   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:19:39.182261   29617 start.go:255] writing updated cluster config ...
	I1011 21:19:39.182594   29617 ssh_runner.go:195] Run: rm -f paused
	I1011 21:19:39.238354   29617 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:19:39.241534   29617 out.go:177] * Done! kubectl is now configured to use "ha-610874" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 21:23:19 ha-610874 crio[662]: time="2024-10-11 21:23:19.961738424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799961713051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfa02937-6949-4839-938a-04e297735d80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:19 ha-610874 crio[662]: time="2024-10-11 21:23:19.962544332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2e4d4ad-82ba-42db-8d56-a25dbe767d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:19 ha-610874 crio[662]: time="2024-10-11 21:23:19.962609127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2e4d4ad-82ba-42db-8d56-a25dbe767d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:19 ha-610874 crio[662]: time="2024-10-11 21:23:19.962924008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2e4d4ad-82ba-42db-8d56-a25dbe767d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.007917235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f11d13d9-04fb-4c0e-af7c-0375ae48311e name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.007990660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f11d13d9-04fb-4c0e-af7c-0375ae48311e name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.009516542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aee0bd62-d9d8-4704-b4aa-e6dd1f5d39d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.010050758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681800010023303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aee0bd62-d9d8-4704-b4aa-e6dd1f5d39d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.010631613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30e9b04d-9bce-4fcd-9385-490e1f93c0af name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.010683198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30e9b04d-9bce-4fcd-9385-490e1f93c0af name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.010899199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30e9b04d-9bce-4fcd-9385-490e1f93c0af name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.048817849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1079749f-79c7-4129-bf23-357945981e23 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.048897598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1079749f-79c7-4129-bf23-357945981e23 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.050323071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ab56682-2e5d-492e-9de5-c4dba549afc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.050796785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681800050772006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ab56682-2e5d-492e-9de5-c4dba549afc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.051591236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8df5ba5-338b-4266-b0db-f94c4e7c7d07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.051667565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8df5ba5-338b-4266-b0db-f94c4e7c7d07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.051892802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8df5ba5-338b-4266-b0db-f94c4e7c7d07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.094041460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fd64118-498a-4400-850f-4443f6769d96 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.094116130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fd64118-498a-4400-850f-4443f6769d96 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.095339464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e1276ab-41e8-4f80-821b-4e0ada4cec09 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.095785218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681800095762574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e1276ab-41e8-4f80-821b-4e0ada4cec09 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.096387381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90b94124-98e8-4c19-9ec6-1cc08a44ca4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.096458804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90b94124-98e8-4c19-9ec6-1cc08a44ca4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:20 ha-610874 crio[662]: time="2024-10-11 21:23:20.096788050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90b94124-98e8-4c19-9ec6-1cc08a44ca4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a12e9c8cc5fc5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3d6c8146ac279       busybox-7dff88458-wdkxg
	add7da026dcc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8079f4949344c       coredns-7c65d6cfc9-xdhdb
	f6f7910716598       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   bb1b1e2f66116       coredns-7c65d6cfc9-bhkxl
	01564ba5bc1e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5b0253d201393       storage-provisioner
	9d5b2015aad60       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   bc055170688e1       kindnet-pd7rn
	4af1bc183cfbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   9bb0d73fd8a6d       kube-proxy-4tqhn
	7009deb3ff5ef       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   343b700a511ad       kube-vip-ha-610874
	1bb0907534c8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9a96e5f0cd28a       kube-controller-manager-ha-610874
	093fe14b91d96       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   089d2c0589273       kube-scheduler-ha-610874
	b6a994e3f4bd9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6fbc98773bd42       kube-apiserver-ha-610874
	1cf13112be94f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   65e184a932364       etcd-ha-610874
	
	
	==> coredns [add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6] <==
	[INFO] 10.244.1.2:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143766s
	[INFO] 10.244.1.2:38119 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142587s
	[INFO] 10.244.1.2:40246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.002793445s
	[INFO] 10.244.1.2:46273 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000207574s
	[INFO] 10.244.0.4:51515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133463s
	[INFO] 10.244.0.4:34555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773084s
	[INFO] 10.244.0.4:56190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010851s
	[INFO] 10.244.0.4:35324 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114943s
	[INFO] 10.244.0.4:37261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075619s
	[INFO] 10.244.2.2:33936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100657s
	[INFO] 10.244.2.2:47182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246779s
	[INFO] 10.244.1.2:44485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167961s
	[INFO] 10.244.1.2:46483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141019s
	[INFO] 10.244.1.2:55464 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121351s
	[INFO] 10.244.0.4:47194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117616s
	[INFO] 10.244.0.4:49523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148468s
	[INFO] 10.244.0.4:45932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127987s
	[INFO] 10.244.0.4:49317 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075167s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169352s
	[INFO] 10.244.2.2:33809 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014751s
	[INFO] 10.244.2.2:44485 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176967s
	[INFO] 10.244.1.2:48359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011299s
	[INFO] 10.244.0.4:56947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140437s
	[INFO] 10.244.0.4:57754 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075899s
	[INFO] 10.244.0.4:59528 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091718s
	
	
	==> coredns [f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb] <==
	[INFO] 127.0.0.1:48153 - 48750 "HINFO IN 7219889624523006915.8528053042981959638. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015325438s
	[INFO] 10.244.2.2:47536 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.017241259s
	[INFO] 10.244.2.2:38591 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013641236s
	[INFO] 10.244.1.2:49949 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322762s
	[INFO] 10.244.1.2:43849 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00009337s
	[INFO] 10.244.0.4:40246 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000070768s
	[INFO] 10.244.0.4:45808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00140468s
	[INFO] 10.244.2.2:36598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219913s
	[INFO] 10.244.2.2:59970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164371s
	[INFO] 10.244.2.2:54785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130909s
	[INFO] 10.244.1.2:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791262s
	[INFO] 10.244.1.2:49139 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158826s
	[INFO] 10.244.1.2:59870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00130207s
	[INFO] 10.244.1.2:48112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127361s
	[INFO] 10.244.0.4:37981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152222s
	[INFO] 10.244.0.4:40975 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001145115s
	[INFO] 10.244.0.4:46746 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060695s
	[INFO] 10.244.2.2:60221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111051s
	[INFO] 10.244.2.2:45949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000966s
	[INFO] 10.244.1.2:51845 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131185s
	[INFO] 10.244.2.2:49925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140614s
	[INFO] 10.244.1.2:40749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139491s
	[INFO] 10.244.1.2:40058 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192557s
	[INFO] 10.244.1.2:36253 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154213s
	[INFO] 10.244.0.4:54354 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127201s
	
	
	==> describe nodes <==
	Name:               ha-610874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:16:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    ha-610874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cfe54b8903d4e3899113202463cdd3d
	  System UUID:                0cfe54b8-903d-4e38-9911-3202463cdd3d
	  Boot ID:                    afa53331-2d72-4daf-aead-d3b59f60fb23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdkxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-bhkxl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-xdhdb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-610874                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-pd7rn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-610874             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-610874    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-4tqhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-610874             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-610874                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-610874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-610874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-610874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  NodeReady                5m57s  kubelet          Node ha-610874 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	
	
	Name:               ha-610874-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:17:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:20:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-610874-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5e48fde498443da85ce03c51747b961
	  System UUID:                e5e48fde-4984-43da-85ce-03c51747b961
	  Boot ID:                    bf2f6504-4406-4797-b6e1-dc754be8ce6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pwg8s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-610874-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-xs5m6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-610874-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-610874-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-4bj7p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-610874-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-610874-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-610874-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-610874-m02 status is now: NodeNotReady
	
	
	Name:               ha-610874-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-610874-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1063a3d54d5d40c88a61db94380d3423
	  System UUID:                1063a3d5-4d5d-40c8-8a61-db94380d3423
	  Boot ID:                    ced9dc07-ccd1-4190-aae0-50f9a8bdae06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4sstr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-610874-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-2c774                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-610874-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-610874-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-cwzw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-610874-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-610874-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m11s                  cidrAllocator    Node ha-610874-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-610874-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	
	
	Name:               ha-610874-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_20_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-610874-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75d61525a70843b49a5efd4786a05869
	  System UUID:                75d61525-a708-43b4-9a5e-fd4786a05869
	  Boot ID:                    172ace10-e670-4373-a755-bb93871c28da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7dn76       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-vrd24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-610874-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m1s                 cidrAllocator    Node ha-610874-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m                   node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-610874-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.855992] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543327] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581790] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.580104] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056339] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.193419] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.137869] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293941] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.956728] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.562630] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.064485] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.508464] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.090437] kauditd_printk_skb: 79 callbacks suppressed
	[Oct11 21:17] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.436722] kauditd_printk_skb: 29 callbacks suppressed
	[ +46.213407] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a] <==
	{"level":"warn","ts":"2024-10-11T21:23:20.205711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.281623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.314266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.371285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.381339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.385314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.389399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.398569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.408084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.424479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.432406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.446847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.461133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.478416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.483301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.500731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.526112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.530625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.534426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.537549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.541740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.551939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.559629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.560057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:20.581711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:23:20 up 7 min,  0 users,  load average: 0.39, 0.39, 0.21
	Linux ha-610874 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952] <==
	I1011 21:22:43.017396       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:22:53.015027       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:22:53.015083       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:22:53.015358       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:22:53.015384       1 main.go:300] handling current node
	I1011 21:22:53.015399       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:22:53.015404       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:22:53.015577       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:22:53.015599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.016986       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:03.017143       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:03.017517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:03.017599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.017887       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:03.017926       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:03.018170       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:03.018292       1 main.go:300] handling current node
	I1011 21:23:13.008357       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:13.008403       1 main.go:300] handling current node
	I1011 21:23:13.008468       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:13.008474       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:13.008844       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:13.008922       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:13.009419       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:13.009448       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948] <==
	I1011 21:17:03.544827       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1011 21:17:03.633951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1011 21:17:53.070315       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.070829       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 84.644µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:17:53.072106       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.073324       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.074623       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.578549ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:10.074019       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.449µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:19:10.074013       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9bd8f8e8-8e91-4067-a12f-1ea2d8bd41c6"
	E1011 21:19:10.074068       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.809µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:45.881753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47690: use of closed network connection
	E1011 21:19:46.062184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47710: use of closed network connection
	E1011 21:19:46.253652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E1011 21:19:46.438494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47750: use of closed network connection
	E1011 21:19:46.637537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47770: use of closed network connection
	E1011 21:19:46.815140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45184: use of closed network connection
	E1011 21:19:47.002661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45216: use of closed network connection
	E1011 21:19:47.179398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45236: use of closed network connection
	E1011 21:19:47.346528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45250: use of closed network connection
	E1011 21:19:47.638405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45264: use of closed network connection
	E1011 21:19:47.808669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45288: use of closed network connection
	E1011 21:19:47.977304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45304: use of closed network connection
	E1011 21:19:48.152762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45326: use of closed network connection
	E1011 21:19:48.324710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45346: use of closed network connection
	E1011 21:19:48.491718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45354: use of closed network connection
	
	
	==> kube-controller-manager [1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865] <==
	I1011 21:20:18.968008       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-610874-m04" podCIDRs=["10.244.3.0/24"]
	I1011 21:20:18.968119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.968257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.984966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:19.260924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.121280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.397093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.070457       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-610874-m04"
	I1011 21:20:23.072402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.132945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.420908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.568334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:29.120840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562762       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:20:39.580852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:40.377354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:49.215156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:21:38.097956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:21:38.098503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.132013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.234358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.800775ms"
	I1011 21:21:38.234458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.4µs"
	I1011 21:21:38.464262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:43.340055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	
	
	==> kube-proxy [4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:17:05.854510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:17:05.879022       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E1011 21:17:05.879501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:17:05.914134       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:17:05.914253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:17:05.914286       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:17:05.916891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:17:05.917757       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:17:05.917796       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:17:05.920479       1 config.go:199] "Starting service config controller"
	I1011 21:17:05.920740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:17:05.920939       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:17:05.920964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:17:05.921847       1 config.go:328] "Starting node config controller"
	I1011 21:17:05.921877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:17:06.021605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:17:06.021672       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:17:06.021955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94] <==
	W1011 21:16:56.914961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:56.914997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:56.955611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 21:16:56.955698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.100673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:16:57.100737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.117148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.117326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.263820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 21:16:57.264353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.296892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 21:16:57.297090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.359800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.360057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.555273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 21:16:57.555402       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 21:17:00.497419       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1011 21:20:19.054608       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7dn76" node="ha-610874-m04"
	E1011 21:20:19.055446       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-7dn76"
	E1011 21:20:19.188470       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dz2h8" node="ha-610874-m04"
	E1011 21:20:19.188552       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-dz2h8"
	E1011 21:20:19.193309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	E1011 21:20:19.195518       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f3a80da1-771c-458b-85ce-bff2b7759d1e(kube-system/kube-proxy-ht4ns) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ht4ns"
	E1011 21:20:19.195828       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" pod="kube-system/kube-proxy-ht4ns"
	I1011 21:20:19.196036       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	
	
	==> kubelet <==
	Oct 11 21:21:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:21:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036447    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036488    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038549    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038630    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040811    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040841    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.042974    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.043019    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044063    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044089    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045695    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045734    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:58 ha-610874 kubelet[1312]: E1011 21:22:58.943175    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.046933    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.047037    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049554    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049631    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.053671    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.054088    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.39s)
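The kube-proxy and kubelet logs above both show the guest kernel rejecting the nftables/ip6tables nat probes ("Operation not supported", "Table does not exist (do you need to insmod?)"). A minimal sketch for reproducing those probes by hand, assuming the ha-610874 profile is still running and that the nft and ip6tables binaries are present in the guest (the table/chain names come straight from the logged commands):

# reproduce the nftables probe that kube-proxy reports as "Operation not supported"
minikube -p ha-610874 ssh -- sudo nft add table ip kube-proxy-canary
minikube -p ha-610874 ssh -- sudo nft delete table ip kube-proxy-canary

# reproduce the ip6tables nat lookup behind the kubelet KUBE-KUBELET-CANARY error
minikube -p ha-610874 ssh -- sudo ip6tables -t nat -L

If these commands fail the same way inside the VM, the errors are a property of the guest kernel configuration rather than of the test run itself.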

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.400012579s)
ha_test.go:415: expected profile "ha-610874" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-610874\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-610874\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-610874\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.10\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.11\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.222\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.87\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt
\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",
\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (1.347958547s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m03_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:16:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:16:16.315983   29617 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:16:16.316246   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316256   29617 out.go:358] Setting ErrFile to fd 2...
	I1011 21:16:16.316260   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316440   29617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:16:16.316986   29617 out.go:352] Setting JSON to false
	I1011 21:16:16.317794   29617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3521,"bootTime":1728677855,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:16:16.317891   29617 start.go:139] virtualization: kvm guest
	I1011 21:16:16.320541   29617 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:16:16.321962   29617 notify.go:220] Checking for updates...
	I1011 21:16:16.321994   29617 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:16:16.323197   29617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:16:16.324431   29617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:16:16.325803   29617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.326998   29617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:16:16.328308   29617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:16:16.329813   29617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:16:16.364781   29617 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 21:16:16.366005   29617 start.go:297] selected driver: kvm2
	I1011 21:16:16.366018   29617 start.go:901] validating driver "kvm2" against <nil>
	I1011 21:16:16.366031   29617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:16:16.366752   29617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.366844   29617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:16:16.382125   29617 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:16:16.382207   29617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 21:16:16.382499   29617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:16:16.382537   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:16.382594   29617 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 21:16:16.382605   29617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 21:16:16.382687   29617 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:16.382807   29617 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.384631   29617 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:16:16.385929   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:16.385976   29617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:16:16.385989   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:16:16.386070   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:16:16.386083   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:16:16.386381   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:16.386407   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json: {Name:mk126d2587705783f49cefd5532c6478d010ac07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:16.386555   29617 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:16:16.386593   29617 start.go:364] duration metric: took 23.105µs to acquireMachinesLock for "ha-610874"
	I1011 21:16:16.386631   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:16:16.386695   29617 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 21:16:16.388125   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:16:16.388266   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:16:16.388308   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:16:16.402198   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1011 21:16:16.402701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:16:16.403193   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:16:16.403238   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:16:16.403629   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:16:16.403831   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:16.403987   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:16.404130   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:16:16.404153   29617 client.go:168] LocalClient.Create starting
	I1011 21:16:16.404179   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:16:16.404207   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404220   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404273   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:16:16.404296   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404309   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404323   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:16:16.404331   29617 main.go:141] libmachine: (ha-610874) Calling .PreCreateCheck
	I1011 21:16:16.404634   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:16.404967   29617 main.go:141] libmachine: Creating machine...
	I1011 21:16:16.404978   29617 main.go:141] libmachine: (ha-610874) Calling .Create
	I1011 21:16:16.405091   29617 main.go:141] libmachine: (ha-610874) Creating KVM machine...
	I1011 21:16:16.406548   29617 main.go:141] libmachine: (ha-610874) DBG | found existing default KVM network
	I1011 21:16:16.407330   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.407180   29640 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1011 21:16:16.407350   29617 main.go:141] libmachine: (ha-610874) DBG | created network xml: 
	I1011 21:16:16.407362   29617 main.go:141] libmachine: (ha-610874) DBG | <network>
	I1011 21:16:16.407369   29617 main.go:141] libmachine: (ha-610874) DBG |   <name>mk-ha-610874</name>
	I1011 21:16:16.407378   29617 main.go:141] libmachine: (ha-610874) DBG |   <dns enable='no'/>
	I1011 21:16:16.407386   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407396   29617 main.go:141] libmachine: (ha-610874) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 21:16:16.407401   29617 main.go:141] libmachine: (ha-610874) DBG |     <dhcp>
	I1011 21:16:16.407430   29617 main.go:141] libmachine: (ha-610874) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 21:16:16.407460   29617 main.go:141] libmachine: (ha-610874) DBG |     </dhcp>
	I1011 21:16:16.407476   29617 main.go:141] libmachine: (ha-610874) DBG |   </ip>
	I1011 21:16:16.407485   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407492   29617 main.go:141] libmachine: (ha-610874) DBG | </network>
	I1011 21:16:16.407498   29617 main.go:141] libmachine: (ha-610874) DBG | 
	I1011 21:16:16.412623   29617 main.go:141] libmachine: (ha-610874) DBG | trying to create private KVM network mk-ha-610874 192.168.39.0/24...
	I1011 21:16:16.475097   29617 main.go:141] libmachine: (ha-610874) DBG | private KVM network mk-ha-610874 192.168.39.0/24 created
	I1011 21:16:16.475123   29617 main.go:141] libmachine: (ha-610874) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.475147   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.475097   29640 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.475159   29617 main.go:141] libmachine: (ha-610874) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:16:16.475241   29617 main.go:141] libmachine: (ha-610874) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:16:16.729125   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.729005   29640 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa...
	I1011 21:16:16.910019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.909910   29640 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk...
	I1011 21:16:16.910047   29617 main.go:141] libmachine: (ha-610874) DBG | Writing magic tar header
	I1011 21:16:16.910056   29617 main.go:141] libmachine: (ha-610874) DBG | Writing SSH key tar header
	I1011 21:16:16.910063   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.910020   29640 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.910136   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874
	I1011 21:16:16.910176   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 (perms=drwx------)
	I1011 21:16:16.910191   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:16:16.910200   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:16:16.910207   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.910225   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:16:16.910242   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:16:16.910260   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:16:16.910277   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:16:16.910286   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:16:16.910293   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:16:16.910306   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:16.910328   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:16:16.910345   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home
	I1011 21:16:16.910356   29617 main.go:141] libmachine: (ha-610874) DBG | Skipping /home - not owner
	I1011 21:16:16.911372   29617 main.go:141] libmachine: (ha-610874) define libvirt domain using xml: 
	I1011 21:16:16.911391   29617 main.go:141] libmachine: (ha-610874) <domain type='kvm'>
	I1011 21:16:16.911398   29617 main.go:141] libmachine: (ha-610874)   <name>ha-610874</name>
	I1011 21:16:16.911402   29617 main.go:141] libmachine: (ha-610874)   <memory unit='MiB'>2200</memory>
	I1011 21:16:16.911407   29617 main.go:141] libmachine: (ha-610874)   <vcpu>2</vcpu>
	I1011 21:16:16.911412   29617 main.go:141] libmachine: (ha-610874)   <features>
	I1011 21:16:16.911418   29617 main.go:141] libmachine: (ha-610874)     <acpi/>
	I1011 21:16:16.911425   29617 main.go:141] libmachine: (ha-610874)     <apic/>
	I1011 21:16:16.911430   29617 main.go:141] libmachine: (ha-610874)     <pae/>
	I1011 21:16:16.911442   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911451   29617 main.go:141] libmachine: (ha-610874)   </features>
	I1011 21:16:16.911459   29617 main.go:141] libmachine: (ha-610874)   <cpu mode='host-passthrough'>
	I1011 21:16:16.911467   29617 main.go:141] libmachine: (ha-610874)   
	I1011 21:16:16.911473   29617 main.go:141] libmachine: (ha-610874)   </cpu>
	I1011 21:16:16.911479   29617 main.go:141] libmachine: (ha-610874)   <os>
	I1011 21:16:16.911484   29617 main.go:141] libmachine: (ha-610874)     <type>hvm</type>
	I1011 21:16:16.911489   29617 main.go:141] libmachine: (ha-610874)     <boot dev='cdrom'/>
	I1011 21:16:16.911492   29617 main.go:141] libmachine: (ha-610874)     <boot dev='hd'/>
	I1011 21:16:16.911498   29617 main.go:141] libmachine: (ha-610874)     <bootmenu enable='no'/>
	I1011 21:16:16.911504   29617 main.go:141] libmachine: (ha-610874)   </os>
	I1011 21:16:16.911510   29617 main.go:141] libmachine: (ha-610874)   <devices>
	I1011 21:16:16.911516   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='cdrom'>
	I1011 21:16:16.911532   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/boot2docker.iso'/>
	I1011 21:16:16.911547   29617 main.go:141] libmachine: (ha-610874)       <target dev='hdc' bus='scsi'/>
	I1011 21:16:16.911568   29617 main.go:141] libmachine: (ha-610874)       <readonly/>
	I1011 21:16:16.911586   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911596   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='disk'>
	I1011 21:16:16.911605   29617 main.go:141] libmachine: (ha-610874)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:16:16.911637   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk'/>
	I1011 21:16:16.911655   29617 main.go:141] libmachine: (ha-610874)       <target dev='hda' bus='virtio'/>
	I1011 21:16:16.911674   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911692   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911700   29617 main.go:141] libmachine: (ha-610874)       <source network='mk-ha-610874'/>
	I1011 21:16:16.911705   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911709   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911713   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911719   29617 main.go:141] libmachine: (ha-610874)       <source network='default'/>
	I1011 21:16:16.911726   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911730   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911736   29617 main.go:141] libmachine: (ha-610874)     <serial type='pty'>
	I1011 21:16:16.911741   29617 main.go:141] libmachine: (ha-610874)       <target port='0'/>
	I1011 21:16:16.911745   29617 main.go:141] libmachine: (ha-610874)     </serial>
	I1011 21:16:16.911751   29617 main.go:141] libmachine: (ha-610874)     <console type='pty'>
	I1011 21:16:16.911757   29617 main.go:141] libmachine: (ha-610874)       <target type='serial' port='0'/>
	I1011 21:16:16.911762   29617 main.go:141] libmachine: (ha-610874)     </console>
	I1011 21:16:16.911771   29617 main.go:141] libmachine: (ha-610874)     <rng model='virtio'>
	I1011 21:16:16.911795   29617 main.go:141] libmachine: (ha-610874)       <backend model='random'>/dev/random</backend>
	I1011 21:16:16.911810   29617 main.go:141] libmachine: (ha-610874)     </rng>
	I1011 21:16:16.911818   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911827   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911835   29617 main.go:141] libmachine: (ha-610874)   </devices>
	I1011 21:16:16.911844   29617 main.go:141] libmachine: (ha-610874) </domain>
	I1011 21:16:16.911853   29617 main.go:141] libmachine: (ha-610874) 
	I1011 21:16:16.916111   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:3e:bc:a1 in network default
	I1011 21:16:16.916699   29617 main.go:141] libmachine: (ha-610874) Ensuring networks are active...
	I1011 21:16:16.916720   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:16.917266   29617 main.go:141] libmachine: (ha-610874) Ensuring network default is active
	I1011 21:16:16.917528   29617 main.go:141] libmachine: (ha-610874) Ensuring network mk-ha-610874 is active
	I1011 21:16:16.918196   29617 main.go:141] libmachine: (ha-610874) Getting domain xml...
	I1011 21:16:16.918917   29617 main.go:141] libmachine: (ha-610874) Creating domain...
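The lines above show the kvm2 driver handing the generated domain XML to libvirt and then booting the machine. Purely as an illustration (not minikube's actual code), a minimal sketch of that step with the libvirt-go bindings, assuming a qemu:///system connection, could look like this:

package kvmdriver

import (
	libvirt "github.com/libvirt/libvirt-go"
)

// createDomain defines a persistent libvirt domain from the XML shown in the
// log and starts it. Connection reuse and error handling are simplified.
func createDomain(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Boot the VM; the driver then waits for a DHCP lease (next step in the log).
	return dom.Create()
}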
	I1011 21:16:18.090043   29617 main.go:141] libmachine: (ha-610874) Waiting to get IP...
	I1011 21:16:18.090745   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.091141   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.091169   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.091121   29640 retry.go:31] will retry after 201.066044ms: waiting for machine to come up
	I1011 21:16:18.293473   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.293939   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.293961   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.293905   29640 retry.go:31] will retry after 378.868503ms: waiting for machine to come up
	I1011 21:16:18.674665   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.675080   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.675111   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.675034   29640 retry.go:31] will retry after 485.059913ms: waiting for machine to come up
	I1011 21:16:19.161402   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.161817   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.161841   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.161779   29640 retry.go:31] will retry after 597.34397ms: waiting for machine to come up
	I1011 21:16:19.760468   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.761020   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.761049   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.760968   29640 retry.go:31] will retry after 563.860814ms: waiting for machine to come up
	I1011 21:16:20.326631   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:20.326999   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:20.327019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:20.326975   29640 retry.go:31] will retry after 723.522472ms: waiting for machine to come up
	I1011 21:16:21.051775   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:21.052216   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:21.052252   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:21.052167   29640 retry.go:31] will retry after 1.08960891s: waiting for machine to come up
	I1011 21:16:22.142962   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:22.143401   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:22.143426   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:22.143368   29640 retry.go:31] will retry after 897.228253ms: waiting for machine to come up
	I1011 21:16:23.042418   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:23.042804   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:23.042830   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:23.042766   29640 retry.go:31] will retry after 1.598924345s: waiting for machine to come up
	I1011 21:16:24.643409   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:24.643801   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:24.643824   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:24.643752   29640 retry.go:31] will retry after 2.213754576s: waiting for machine to come up
	I1011 21:16:26.858883   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:26.859262   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:26.859288   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:26.859206   29640 retry.go:31] will retry after 2.657896821s: waiting for machine to come up
	I1011 21:16:29.518223   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:29.518660   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:29.518685   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:29.518604   29640 retry.go:31] will retry after 3.090933093s: waiting for machine to come up
	I1011 21:16:32.611083   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:32.611504   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:32.611526   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:32.611439   29640 retry.go:31] will retry after 4.256728144s: waiting for machine to come up
	I1011 21:16:36.869470   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.869869   29617 main.go:141] libmachine: (ha-610874) Found IP for machine: 192.168.39.10
	I1011 21:16:36.869889   29617 main.go:141] libmachine: (ha-610874) Reserving static IP address...
	I1011 21:16:36.869901   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has current primary IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.870189   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "ha-610874", mac: "52:54:00:5f:c7:da", ip: "192.168.39.10"} in network mk-ha-610874
	I1011 21:16:36.939387   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:36.939416   29617 main.go:141] libmachine: (ha-610874) Reserved static IP address: 192.168.39.10
	I1011 21:16:36.939452   29617 main.go:141] libmachine: (ha-610874) Waiting for SSH to be available...
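The retry.go lines above poll libvirt for a DHCP lease matching the domain's MAC address, sleeping a little longer after each miss until an address appears. A minimal sketch of that pattern with libvirt-go, assuming a fixed attempt cap and a simple doubling backoff rather than minikube's exact schedule:

package kvmdriver

import (
	"fmt"
	"time"

	libvirt "github.com/libvirt/libvirt-go"
)

// lookupIP polls the libvirt network for a DHCP lease whose MAC matches the
// domain's interface, backing off between attempts.
func lookupIP(conn *libvirt.Connect, networkName, mac string) (string, error) {
	network, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer network.Free()

	backoff := 200 * time.Millisecond
	for attempt := 0; attempt < 20; attempt++ {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, lease := range leases {
			if lease.Mac == mac && lease.IPaddr != "" {
				return lease.IPaddr, nil // "Found IP for machine"
			}
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, roughly as in the log
	}
	return "", fmt.Errorf("no DHCP lease for MAC %s in network %s", mac, networkName)
}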
	I1011 21:16:36.941715   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.941968   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874
	I1011 21:16:36.941981   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:5f:c7:da
	I1011 21:16:36.942096   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:36.942140   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:36.942184   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:36.942200   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:36.942220   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:36.945904   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:16:36.945918   29617 main.go:141] libmachine: (ha-610874) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:16:36.945924   29617 main.go:141] libmachine: (ha-610874) DBG | command : exit 0
	I1011 21:16:36.945937   29617 main.go:141] libmachine: (ha-610874) DBG | err     : exit status 255
	I1011 21:16:36.945943   29617 main.go:141] libmachine: (ha-610874) DBG | output  : 
	I1011 21:16:39.948099   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:39.950401   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950756   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:39.950783   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950892   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:39.950914   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:39.950953   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:39.950970   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:39.950994   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:40.078944   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: <nil>: 
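WaitForSSH shells out to the system ssh client with the options printed above and keeps running `exit 0` until it succeeds; the first attempt here failed with exit status 255 because sshd was not up yet. A hedged sketch of that probe, assuming a fixed 3-second pause between attempts:

package provision

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... "exit 0"` until the command exits 0, mirroring the
// external-ssh probe in the log above.
func waitForSSH(ip, keyPath string, attempts int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // matches "SSH cmd err, output: <nil>"
		} else {
			lastErr = err
		}
		time.Sleep(3 * time.Second) // assumed pause between probes
	}
	return fmt.Errorf("ssh never became available: %v", lastErr)
}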
	I1011 21:16:40.079215   29617 main.go:141] libmachine: (ha-610874) KVM machine creation complete!
	I1011 21:16:40.079553   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:40.080090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080284   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080465   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:16:40.080487   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:16:40.081981   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:16:40.081998   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:16:40.082006   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:16:40.082015   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.084298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084628   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.084651   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084818   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.084959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085094   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085224   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.085388   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.085639   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.085653   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:16:40.198146   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.198167   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:16:40.198175   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.200910   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201288   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.201309   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201507   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.201664   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.201836   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.202076   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.202254   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.202419   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.202429   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:16:40.320067   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:16:40.320126   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:16:40.320134   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:16:40.320143   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320383   29617 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:16:40.320406   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320566   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.322841   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323123   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.323151   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323298   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.323462   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323604   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323710   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.323847   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.324007   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.324018   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:16:40.453038   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:16:40.453062   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.455945   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456318   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.456341   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456518   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.456721   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456849   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.457152   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.457380   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.457403   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:16:40.579667   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.579694   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:16:40.579712   29617 buildroot.go:174] setting up certificates
	I1011 21:16:40.579722   29617 provision.go:84] configureAuth start
	I1011 21:16:40.579730   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.579972   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:40.582609   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.582944   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.582970   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.583046   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.585314   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585630   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.585652   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585815   29617 provision.go:143] copyHostCerts
	I1011 21:16:40.585854   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585886   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:16:40.585905   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585976   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:16:40.586075   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586099   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:16:40.586109   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586148   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:16:40.586259   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586280   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:16:40.586286   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586312   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:16:40.586375   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
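configureAuth signs a per-machine server certificate against the CA in .minikube/certs, with the SANs listed above (127.0.0.1, 192.168.39.10, ha-610874, localhost, minikube). The following is only an illustrative crypto/x509 sketch, assuming an RSA (PKCS#1) CA key and hypothetical file names; minikube's own helper differs in detail:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the signing CA (paths and PKCS#1 RSA key format are assumptions).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the machine's server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	// SANs mirror the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-610874", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)

	// Write server.pem / server-key.pem, the files later scp'd to /etc/docker.
	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}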
	I1011 21:16:40.739496   29617 provision.go:177] copyRemoteCerts
	I1011 21:16:40.739549   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:16:40.739572   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.742211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742512   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.742540   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742690   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.742858   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.743050   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.743333   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:40.830053   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:16:40.830129   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:16:40.854808   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:16:40.854871   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:16:40.878779   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:16:40.878844   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1011 21:16:40.903681   29617 provision.go:87] duration metric: took 323.94786ms to configureAuth
	I1011 21:16:40.903706   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:16:40.903876   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:16:40.903945   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.906420   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906781   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.906802   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906980   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.907177   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907312   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907417   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.907537   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.907709   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.907729   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:16:41.149826   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:16:41.149854   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:16:41.149864   29617 main.go:141] libmachine: (ha-610874) Calling .GetURL
	I1011 21:16:41.151110   29617 main.go:141] libmachine: (ha-610874) DBG | Using libvirt version 6000000
	I1011 21:16:41.153298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153626   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.153645   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153813   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:16:41.153832   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:16:41.153840   29617 client.go:171] duration metric: took 24.749677896s to LocalClient.Create
	I1011 21:16:41.153864   29617 start.go:167] duration metric: took 24.749734503s to libmachine.API.Create "ha-610874"
	I1011 21:16:41.153877   29617 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:16:41.153888   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:16:41.153907   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.154134   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:16:41.154156   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.156353   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156731   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.156764   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.157060   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.157197   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.157377   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.245691   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:16:41.249882   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:16:41.249905   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:16:41.249959   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:16:41.250032   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:16:41.250041   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:16:41.250126   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:16:41.259595   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:41.283193   29617 start.go:296] duration metric: took 129.282074ms for postStartSetup
	I1011 21:16:41.283237   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:41.283845   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.286641   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.286965   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.286993   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.287545   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:41.287766   29617 start.go:128] duration metric: took 24.901059572s to createHost
	I1011 21:16:41.287798   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.290002   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290466   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.290494   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290571   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.290756   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.290937   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.291088   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.291234   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:41.291438   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:41.291450   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:16:41.403429   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681401.368525171
	
	I1011 21:16:41.403454   29617 fix.go:216] guest clock: 1728681401.368525171
	I1011 21:16:41.403464   29617 fix.go:229] Guest: 2024-10-11 21:16:41.368525171 +0000 UTC Remote: 2024-10-11 21:16:41.287784391 +0000 UTC m=+25.009627787 (delta=80.74078ms)
	I1011 21:16:41.403482   29617 fix.go:200] guest clock delta is within tolerance: 80.74078ms
	I1011 21:16:41.403487   29617 start.go:83] releasing machines lock for "ha-610874", held for 25.016883267s
	I1011 21:16:41.403504   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.403754   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.406243   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406536   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.406580   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406719   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407201   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407373   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407483   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:16:41.407533   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.407566   29617 ssh_runner.go:195] Run: cat /version.json
	I1011 21:16:41.407594   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.409924   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410186   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410232   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410307   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.410474   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.410626   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.410667   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410689   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410822   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.410885   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.411000   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.411159   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.411313   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.492040   29617 ssh_runner.go:195] Run: systemctl --version
	I1011 21:16:41.526227   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:16:41.684068   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:16:41.690188   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:16:41.690243   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:16:41.709475   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:16:41.709500   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:16:41.709563   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:16:41.725364   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:16:41.739326   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:16:41.739404   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:16:41.753640   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:16:41.767723   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:16:41.878060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:16:42.036051   29617 docker.go:233] disabling docker service ...
	I1011 21:16:42.036136   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:16:42.051987   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:16:42.065946   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:16:42.197199   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:16:42.333061   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:16:42.346878   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:16:42.365538   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:16:42.365592   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.375884   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:16:42.375943   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.386250   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.396765   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.407109   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:16:42.417549   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.427975   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.446147   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.456868   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:16:42.466165   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:16:42.466232   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:16:42.479799   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:16:42.489557   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:42.623905   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:16:42.716796   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:16:42.716871   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:16:42.721858   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:16:42.721918   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:16:42.725704   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:16:42.764981   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:16:42.765051   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.793072   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.822676   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:16:42.824024   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:42.826801   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827112   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:42.827137   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827350   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:16:42.831498   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:42.845346   29617 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:16:42.845519   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:42.845589   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:42.883957   29617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 21:16:42.884036   29617 ssh_runner.go:195] Run: which lz4
	I1011 21:16:42.888030   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1011 21:16:42.888109   29617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 21:16:42.892241   29617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 21:16:42.892274   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 21:16:44.230363   29617 crio.go:462] duration metric: took 1.342272134s to copy over tarball
	I1011 21:16:44.230455   29617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 21:16:46.214291   29617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.983794178s)
	I1011 21:16:46.214315   29617 crio.go:469] duration metric: took 1.983922074s to extract the tarball
	I1011 21:16:46.214323   29617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 21:16:46.250833   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:46.298082   29617 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:16:46.298105   29617 cache_images.go:84] Images are preloaded, skipping loading
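After extracting the preload tarball, the same `crictl images --output json` probe that failed at 21:16:42 now finds the v1.31.1 images. A small sketch of such a check, with the JSON field names assumed to match crictl's output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just the fields needed from `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded for cri-o runtime.")
				return
			}
		}
	}
	fmt.Printf("couldn't find preloaded image for %q. assuming images are not preloaded.\n", want)
}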
	I1011 21:16:46.298113   29617 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:16:46.298286   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:16:46.298384   29617 ssh_runner.go:195] Run: crio config
	I1011 21:16:46.343467   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:46.343493   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:46.343504   29617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:16:46.343528   29617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:16:46.343703   29617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:16:46.343730   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:16:46.343782   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:16:46.359672   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:16:46.359783   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
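A quick spot-check of the kube-vip manifest above, written as a minimal shell sketch (not part of the test run); it assumes the interface (eth0), VIP (192.168.39.254) and port (8443) taken from the generated config:
	# confirm the leader has bound the VIP on eth0
	ip addr show eth0 | grep -w 192.168.39.254
	# /healthz is readable without credentials by default, so this should print "ok"
	curl -ks https://192.168.39.254:8443/healthz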
	I1011 21:16:46.359850   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:16:46.370362   29617 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:16:46.370421   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:16:46.380573   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:16:46.396912   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:16:46.413759   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:16:46.430823   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1011 21:16:46.447531   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:16:46.451423   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:46.463809   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:46.584169   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:16:46.602286   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:16:46.602304   29617 certs.go:194] generating shared ca certs ...
	I1011 21:16:46.602322   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.602467   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:16:46.602520   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:16:46.602533   29617 certs.go:256] generating profile certs ...
	I1011 21:16:46.602592   29617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:16:46.602638   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt with IP's: []
	I1011 21:16:46.782362   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt ...
	I1011 21:16:46.782395   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt: {Name:mk3593f4e91ffc0372a05bdad3e927ec316a91aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782596   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key ...
	I1011 21:16:46.782611   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key: {Name:mk9677876d62491747fdfd0e3f8d4776645d1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782738   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7
	I1011 21:16:46.782756   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.254]
	I1011 21:16:47.380528   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 ...
	I1011 21:16:47.380560   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7: {Name:mk19e9d91179b46f9b03d4d9246179f41c3327ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380745   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 ...
	I1011 21:16:47.380776   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7: {Name:mk7fedd6c046987d5af851e2eed75ec367a33eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380872   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:16:47.380985   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:16:47.381067   29617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:16:47.381087   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt with IP's: []
	I1011 21:16:47.453906   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt ...
	I1011 21:16:47.453937   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt: {Name:mka90ed4c47ce0265f1b9da519124bd4fc73bbae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454114   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key ...
	I1011 21:16:47.454128   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key: {Name:mk47103fb5abe47f635456ba2a4ed9a69f678b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454230   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:16:47.454250   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:16:47.454266   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:16:47.454284   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:16:47.454303   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:16:47.454319   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:16:47.454335   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:16:47.454354   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:16:47.454417   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:16:47.454461   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:16:47.454473   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:16:47.454508   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:16:47.454543   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:16:47.454573   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:16:47.454648   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:47.454696   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.454719   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.454738   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.455273   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:16:47.481574   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:16:47.514683   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:16:47.538141   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:16:47.561021   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:16:47.585590   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:16:47.608816   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:16:47.632949   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:16:47.656849   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:16:47.680043   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:16:47.703417   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:16:47.726027   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:16:47.747378   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:16:47.754019   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:16:47.765407   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770565   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770631   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.776851   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:16:47.788126   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:16:47.799052   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803877   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803931   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.810054   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:16:47.821548   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:16:47.832817   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837775   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837829   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.843943   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
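The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's CA lookup convention: the link name is the certificate's subject hash with a ".0" suffix. A minimal sketch of the same step done by hand, reusing the minikubeCA.pem path from the log:
	# derive the subject hash, then create the lookup symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"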
	I1011 21:16:47.855398   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:16:47.859877   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:16:47.859928   29617 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:47.860006   29617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:16:47.860081   29617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:16:47.903170   29617 cri.go:89] found id: ""
	I1011 21:16:47.903248   29617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:16:47.914400   29617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 21:16:47.924721   29617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 21:16:47.935673   29617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 21:16:47.935695   29617 kubeadm.go:157] found existing configuration files:
	
	I1011 21:16:47.935740   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 21:16:47.945454   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 21:16:47.945524   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 21:16:47.955440   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 21:16:47.964875   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 21:16:47.964944   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 21:16:47.974788   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.984258   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 21:16:47.984307   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.993726   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 21:16:48.002584   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 21:16:48.002650   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 21:16:48.012268   29617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 21:16:48.121155   29617 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 21:16:48.121351   29617 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 21:16:48.250203   29617 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 21:16:48.250314   29617 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 21:16:48.250452   29617 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 21:16:48.261245   29617 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 21:16:48.410718   29617 out.go:235]   - Generating certificates and keys ...
	I1011 21:16:48.410844   29617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 21:16:48.410931   29617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 21:16:48.542325   29617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 21:16:48.608543   29617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 21:16:48.797753   29617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 21:16:48.873089   29617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 21:16:49.070716   29617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 21:16:49.071155   29617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.372270   29617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 21:16:49.372512   29617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.423801   29617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 21:16:49.655483   29617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 21:16:49.724172   29617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 21:16:49.724487   29617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 21:16:50.017890   29617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 21:16:50.285355   29617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 21:16:50.392641   29617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 21:16:50.748011   29617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 21:16:50.984708   29617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 21:16:50.985344   29617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 21:16:50.988659   29617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 21:16:50.990557   29617 out.go:235]   - Booting up control plane ...
	I1011 21:16:50.990675   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 21:16:50.990768   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 21:16:50.992112   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 21:16:51.010698   29617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 21:16:51.019483   29617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 21:16:51.019560   29617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 21:16:51.165086   29617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 21:16:51.165244   29617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 21:16:51.666035   29617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.408194ms
	I1011 21:16:51.666178   29617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 21:16:58.166573   29617 kubeadm.go:310] [api-check] The API server is healthy after 6.502304408s
	I1011 21:16:58.179631   29617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 21:16:58.195028   29617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 21:16:58.220647   29617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 21:16:58.220871   29617 kubeadm.go:310] [mark-control-plane] Marking the node ha-610874 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 21:16:58.236113   29617 kubeadm.go:310] [bootstrap-token] Using token: j1o64v.rjb74fe9bovjls5f
	I1011 21:16:58.237740   29617 out.go:235]   - Configuring RBAC rules ...
	I1011 21:16:58.237875   29617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 21:16:58.245441   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 21:16:58.254162   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 21:16:58.259203   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 21:16:58.274345   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 21:16:58.278840   29617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 21:16:58.578576   29617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 21:16:59.008419   29617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 21:16:59.573438   29617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 21:16:59.574394   29617 kubeadm.go:310] 
	I1011 21:16:59.574519   29617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 21:16:59.574537   29617 kubeadm.go:310] 
	I1011 21:16:59.574645   29617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 21:16:59.574659   29617 kubeadm.go:310] 
	I1011 21:16:59.574685   29617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 21:16:59.574753   29617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 21:16:59.574825   29617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 21:16:59.574836   29617 kubeadm.go:310] 
	I1011 21:16:59.574917   29617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 21:16:59.574925   29617 kubeadm.go:310] 
	I1011 21:16:59.574988   29617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 21:16:59.574998   29617 kubeadm.go:310] 
	I1011 21:16:59.575073   29617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 21:16:59.575188   29617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 21:16:59.575286   29617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 21:16:59.575300   29617 kubeadm.go:310] 
	I1011 21:16:59.575406   29617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 21:16:59.575519   29617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 21:16:59.575533   29617 kubeadm.go:310] 
	I1011 21:16:59.575645   29617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.575774   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 21:16:59.575812   29617 kubeadm.go:310] 	--control-plane 
	I1011 21:16:59.575825   29617 kubeadm.go:310] 
	I1011 21:16:59.575924   29617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 21:16:59.575932   29617 kubeadm.go:310] 
	I1011 21:16:59.576044   29617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.576197   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
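If the join commands above are reused later, the --discovery-token-ca-cert-hash value can be re-derived from the cluster CA; a minimal sketch, assuming the CA written to /var/lib/minikube/certs/ca.crt earlier in this run:
	# sha256 of the CA's public key, in the "sha256:<hex>" form kubeadm expects
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'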
	I1011 21:16:59.576985   29617 kubeadm.go:310] W1011 21:16:48.086167     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577396   29617 kubeadm.go:310] W1011 21:16:48.087109     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577500   29617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
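The two deprecation warnings are expected: the kubeadm.yaml staged at /var/tmp/minikube/kubeadm.yaml earlier in this run still uses the kubeadm.k8s.io/v1beta3 API. Following the hint in the warning itself, a minimal sketch of migrating it on the node (the output file name here is only an example):
	# rewrite the v1beta3 spec using the newer API version kubeadm suggests
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml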
	I1011 21:16:59.577512   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:59.577520   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:59.579873   29617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 21:16:59.581130   29617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 21:16:59.586500   29617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 21:16:59.586517   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 21:16:59.606073   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 21:16:59.978632   29617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 21:16:59.978713   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:16:59.978732   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874 minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=true
	I1011 21:17:00.174706   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:00.174708   29617 ops.go:34] apiserver oom_adj: -16
	I1011 21:17:00.675693   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.174849   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.675518   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.174832   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.674899   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.174904   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.254520   29617 kubeadm.go:1113] duration metric: took 3.275873473s to wait for elevateKubeSystemPrivileges
	I1011 21:17:03.254557   29617 kubeadm.go:394] duration metric: took 15.394633584s to StartCluster
	I1011 21:17:03.254574   29617 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.254667   29617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.255426   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.255658   29617 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:03.255670   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 21:17:03.255683   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:17:03.255698   29617 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 21:17:03.255784   29617 addons.go:69] Setting storage-provisioner=true in profile "ha-610874"
	I1011 21:17:03.255803   29617 addons.go:234] Setting addon storage-provisioner=true in "ha-610874"
	I1011 21:17:03.255807   29617 addons.go:69] Setting default-storageclass=true in profile "ha-610874"
	I1011 21:17:03.255835   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.255840   29617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-610874"
	I1011 21:17:03.255868   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:03.256287   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256300   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.256367   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.271522   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1011 21:17:03.271689   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I1011 21:17:03.272056   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272154   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272592   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272609   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272755   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272784   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272931   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273093   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.273112   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273524   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.273562   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.275146   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.275352   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 21:17:03.275763   29617 cert_rotation.go:140] Starting client certificate rotation controller
	I1011 21:17:03.275942   29617 addons.go:234] Setting addon default-storageclass=true in "ha-610874"
	I1011 21:17:03.275971   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.276303   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.276340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.288268   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1011 21:17:03.288701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.289186   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.289212   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.289573   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.289758   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.290984   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1011 21:17:03.291476   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.291798   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.292035   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.292052   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.292353   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.292786   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.292827   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.293969   29617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 21:17:03.295203   29617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.295223   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 21:17:03.295241   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.298221   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298669   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.298695   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298893   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.299039   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.299248   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.299371   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.307894   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33173
	I1011 21:17:03.308319   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.308780   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.308794   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.309115   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.309363   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.311112   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.311334   29617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.311352   29617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 21:17:03.311368   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.314487   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.314914   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.314938   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.315112   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.315274   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.315432   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.315580   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.390668   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 21:17:03.477039   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.523146   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.861068   29617 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
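The sed pipeline above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.39.1 inside the cluster; a minimal sketch for verifying the result from the node, reusing the kubeconfig and kubectl paths from this run:
	# the replaced Corefile should now carry the host.minikube.internal mapping
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A2 'hosts {'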
	I1011 21:17:04.076843   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076867   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.076939   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076960   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077121   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077129   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077152   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077162   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077170   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077198   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077208   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077216   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077228   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077423   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077435   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077497   29617 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 21:17:04.077512   29617 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 21:17:04.077537   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077557   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077562   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077613   29617 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1011 21:17:04.077629   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.077640   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.077652   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.088649   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:04.089177   29617 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1011 21:17:04.089196   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.089204   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.089222   29617 round_trippers.go:473]     Content-Type: application/json
	I1011 21:17:04.089229   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.091300   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:04.091435   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.091450   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.091679   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.091716   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.091728   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.093543   29617 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 21:17:04.094783   29617 addons.go:510] duration metric: took 839.089678ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1011 21:17:04.094816   29617 start.go:246] waiting for cluster config update ...
	I1011 21:17:04.094834   29617 start.go:255] writing updated cluster config ...
	I1011 21:17:04.096346   29617 out.go:201] 
	I1011 21:17:04.097685   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:04.097746   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.099389   29617 out.go:177] * Starting "ha-610874-m02" control-plane node in "ha-610874" cluster
	I1011 21:17:04.100656   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:17:04.100673   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:17:04.100774   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:17:04.100788   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:17:04.100851   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.100998   29617 start.go:360] acquireMachinesLock for ha-610874-m02: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:17:04.101042   29617 start.go:364] duration metric: took 25.742µs to acquireMachinesLock for "ha-610874-m02"
	I1011 21:17:04.101063   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:04.101132   29617 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1011 21:17:04.102447   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:17:04.102519   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:04.102554   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:04.117018   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I1011 21:17:04.117574   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:04.118020   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:04.118046   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:04.118342   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:04.118495   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:04.118627   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:04.118734   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:17:04.118757   29617 client.go:168] LocalClient.Create starting
	I1011 21:17:04.118782   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:17:04.118814   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118825   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118865   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:17:04.118883   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118895   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118909   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:17:04.118916   29617 main.go:141] libmachine: (ha-610874-m02) Calling .PreCreateCheck
	I1011 21:17:04.119022   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:04.119344   29617 main.go:141] libmachine: Creating machine...
	I1011 21:17:04.119354   29617 main.go:141] libmachine: (ha-610874-m02) Calling .Create
	I1011 21:17:04.119448   29617 main.go:141] libmachine: (ha-610874-m02) Creating KVM machine...
	I1011 21:17:04.120553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing default KVM network
	I1011 21:17:04.120665   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing private KVM network mk-ha-610874
	I1011 21:17:04.120779   29617 main.go:141] libmachine: (ha-610874-m02) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.120796   29617 main.go:141] libmachine: (ha-610874-m02) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:17:04.120855   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.120779   29991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.120961   29617 main.go:141] libmachine: (ha-610874-m02) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:17:04.350121   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.350001   29991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa...
	I1011 21:17:04.441541   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441397   29991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk...
	I1011 21:17:04.441576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing magic tar header
	I1011 21:17:04.441591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing SSH key tar header
	I1011 21:17:04.441603   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441509   29991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.441619   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02
	I1011 21:17:04.441634   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:17:04.441650   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 (perms=drwx------)
	I1011 21:17:04.441661   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.441676   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:17:04.441687   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:17:04.441702   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:17:04.441718   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:17:04.441730   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:17:04.441739   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:17:04.441771   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:04.441793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:17:04.441805   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:17:04.441813   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home
	I1011 21:17:04.441826   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Skipping /home - not owner
	I1011 21:17:04.442818   29617 main.go:141] libmachine: (ha-610874-m02) define libvirt domain using xml: 
	I1011 21:17:04.442835   29617 main.go:141] libmachine: (ha-610874-m02) <domain type='kvm'>
	I1011 21:17:04.442851   29617 main.go:141] libmachine: (ha-610874-m02)   <name>ha-610874-m02</name>
	I1011 21:17:04.442859   29617 main.go:141] libmachine: (ha-610874-m02)   <memory unit='MiB'>2200</memory>
	I1011 21:17:04.442867   29617 main.go:141] libmachine: (ha-610874-m02)   <vcpu>2</vcpu>
	I1011 21:17:04.442876   29617 main.go:141] libmachine: (ha-610874-m02)   <features>
	I1011 21:17:04.442884   29617 main.go:141] libmachine: (ha-610874-m02)     <acpi/>
	I1011 21:17:04.442894   29617 main.go:141] libmachine: (ha-610874-m02)     <apic/>
	I1011 21:17:04.442901   29617 main.go:141] libmachine: (ha-610874-m02)     <pae/>
	I1011 21:17:04.442909   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.442916   29617 main.go:141] libmachine: (ha-610874-m02)   </features>
	I1011 21:17:04.442924   29617 main.go:141] libmachine: (ha-610874-m02)   <cpu mode='host-passthrough'>
	I1011 21:17:04.442929   29617 main.go:141] libmachine: (ha-610874-m02)   
	I1011 21:17:04.442935   29617 main.go:141] libmachine: (ha-610874-m02)   </cpu>
	I1011 21:17:04.442940   29617 main.go:141] libmachine: (ha-610874-m02)   <os>
	I1011 21:17:04.442944   29617 main.go:141] libmachine: (ha-610874-m02)     <type>hvm</type>
	I1011 21:17:04.442949   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='cdrom'/>
	I1011 21:17:04.442953   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='hd'/>
	I1011 21:17:04.442958   29617 main.go:141] libmachine: (ha-610874-m02)     <bootmenu enable='no'/>
	I1011 21:17:04.442966   29617 main.go:141] libmachine: (ha-610874-m02)   </os>
	I1011 21:17:04.442970   29617 main.go:141] libmachine: (ha-610874-m02)   <devices>
	I1011 21:17:04.442975   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='cdrom'>
	I1011 21:17:04.442982   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/boot2docker.iso'/>
	I1011 21:17:04.442988   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hdc' bus='scsi'/>
	I1011 21:17:04.442992   29617 main.go:141] libmachine: (ha-610874-m02)       <readonly/>
	I1011 21:17:04.442999   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443009   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='disk'>
	I1011 21:17:04.443018   29617 main.go:141] libmachine: (ha-610874-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:17:04.443028   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk'/>
	I1011 21:17:04.443033   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hda' bus='virtio'/>
	I1011 21:17:04.443037   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443042   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443047   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='mk-ha-610874'/>
	I1011 21:17:04.443052   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443057   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443061   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443066   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='default'/>
	I1011 21:17:04.443071   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443076   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443080   29617 main.go:141] libmachine: (ha-610874-m02)     <serial type='pty'>
	I1011 21:17:04.443085   29617 main.go:141] libmachine: (ha-610874-m02)       <target port='0'/>
	I1011 21:17:04.443089   29617 main.go:141] libmachine: (ha-610874-m02)     </serial>
	I1011 21:17:04.443094   29617 main.go:141] libmachine: (ha-610874-m02)     <console type='pty'>
	I1011 21:17:04.443099   29617 main.go:141] libmachine: (ha-610874-m02)       <target type='serial' port='0'/>
	I1011 21:17:04.443103   29617 main.go:141] libmachine: (ha-610874-m02)     </console>
	I1011 21:17:04.443109   29617 main.go:141] libmachine: (ha-610874-m02)     <rng model='virtio'>
	I1011 21:17:04.443137   29617 main.go:141] libmachine: (ha-610874-m02)       <backend model='random'>/dev/random</backend>
	I1011 21:17:04.443157   29617 main.go:141] libmachine: (ha-610874-m02)     </rng>
	I1011 21:17:04.443167   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443173   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443189   29617 main.go:141] libmachine: (ha-610874-m02)   </devices>
	I1011 21:17:04.443198   29617 main.go:141] libmachine: (ha-610874-m02) </domain>
	I1011 21:17:04.443208   29617 main.go:141] libmachine: (ha-610874-m02) 
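The XML logged line by line above is the libvirt domain definition generated for the new node. A condensed sketch of rendering that kind of definition from a Go text/template; the template keeps only the fields visible in the log, the paths are placeholders, and this is not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// Condensed domain template covering only the fields shown in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type domain struct {
	Name, ISO, Disk, Network string
	MemoryMiB, CPUs          int
}

func main() {
	d := domain{
		Name:      "ha-610874-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		ISO:       "/path/to/boot2docker.iso",        // placeholder path
		Disk:      "/path/to/ha-610874-m02.rawdisk", // placeholder path
		Network:   "mk-ha-610874",
	}
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, d)
}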
	I1011 21:17:04.449596   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f0:af:4d in network default
	I1011 21:17:04.450115   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring networks are active...
	I1011 21:17:04.450137   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:04.450871   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network default is active
	I1011 21:17:04.451172   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network mk-ha-610874 is active
	I1011 21:17:04.451696   29617 main.go:141] libmachine: (ha-610874-m02) Getting domain xml...
	I1011 21:17:04.452466   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:05.723228   29617 main.go:141] libmachine: (ha-610874-m02) Waiting to get IP...
	I1011 21:17:05.723997   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.724437   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.724489   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.724421   29991 retry.go:31] will retry after 216.617717ms: waiting for machine to come up
	I1011 21:17:05.943023   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.943470   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.943493   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.943418   29991 retry.go:31] will retry after 323.475706ms: waiting for machine to come up
	I1011 21:17:06.268759   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.269130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.269185   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.269071   29991 retry.go:31] will retry after 341.815784ms: waiting for machine to come up
	I1011 21:17:06.612587   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.613044   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.613069   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.612994   29991 retry.go:31] will retry after 575.567056ms: waiting for machine to come up
	I1011 21:17:07.189626   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.190024   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.190052   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.189979   29991 retry.go:31] will retry after 508.01524ms: waiting for machine to come up
	I1011 21:17:07.699512   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.699870   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.699896   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.699824   29991 retry.go:31] will retry after 706.438375ms: waiting for machine to come up
	I1011 21:17:08.408130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:08.408534   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:08.408553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:08.408491   29991 retry.go:31] will retry after 819.845939ms: waiting for machine to come up
	I1011 21:17:09.229809   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:09.230337   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:09.230361   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:09.230274   29991 retry.go:31] will retry after 1.08916769s: waiting for machine to come up
	I1011 21:17:10.320875   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:10.321316   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:10.321344   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:10.321274   29991 retry.go:31] will retry after 1.825013226s: waiting for machine to come up
	I1011 21:17:12.148418   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:12.148892   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:12.148912   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:12.148854   29991 retry.go:31] will retry after 1.911054739s: waiting for machine to come up
	I1011 21:17:14.062931   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:14.063353   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:14.063381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:14.063300   29991 retry.go:31] will retry after 2.512289875s: waiting for machine to come up
	I1011 21:17:16.577169   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:16.577555   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:16.577580   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:16.577519   29991 retry.go:31] will retry after 3.376491238s: waiting for machine to come up
	I1011 21:17:19.955606   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:19.955968   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:19.955995   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:19.955923   29991 retry.go:31] will retry after 4.049589987s: waiting for machine to come up
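The "waiting for machine to come up" messages above are a polling loop: each attempt asks libvirt for the domain's DHCP lease and, when none exists yet, sleeps for a growing, jittered delay before retrying. A minimal sketch of that shape; lookupIP below stands in for the actual lease query and the backoff constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries an IP lookup with a growing, jittered backoff, roughly
// matching the delays printed above (hundreds of ms up to a few seconds).
func waitForIP(lookupIP func() (string, bool), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.39.11", attempts > 3 // pretend the lease shows up on the 4th poll
	}, time.Minute)
	fmt.Println(ip, err)
}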
	I1011 21:17:24.010143   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010574   29617 main.go:141] libmachine: (ha-610874-m02) Found IP for machine: 192.168.39.11
	I1011 21:17:24.010593   29617 main.go:141] libmachine: (ha-610874-m02) Reserving static IP address...
	I1011 21:17:24.010602   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010971   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "ha-610874-m02", mac: "52:54:00:f3:cf:5a", ip: "192.168.39.11"} in network mk-ha-610874
	I1011 21:17:24.079043   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:24.079077   29617 main.go:141] libmachine: (ha-610874-m02) Reserved static IP address: 192.168.39.11
	I1011 21:17:24.079093   29617 main.go:141] libmachine: (ha-610874-m02) Waiting for SSH to be available...
	I1011 21:17:24.081543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.081867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874
	I1011 21:17:24.081880   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:f3:cf:5a
	I1011 21:17:24.082047   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:24.082076   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:24.082376   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:24.082572   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:24.082591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:24.086567   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:17:24.086597   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:17:24.086608   29617 main.go:141] libmachine: (ha-610874-m02) DBG | command : exit 0
	I1011 21:17:24.086627   29617 main.go:141] libmachine: (ha-610874-m02) DBG | err     : exit status 255
	I1011 21:17:24.086641   29617 main.go:141] libmachine: (ha-610874-m02) DBG | output  : 
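The WaitForSSH probe above simply runs "exit 0" over SSH with the options shown; any non-zero status (here 255, since no lease was visible yet and the host part of docker@ was still empty) counts as "not ready" and triggers another attempt. A sketch of that probe with os/exec; the host, key path, and retry interval below come from this run's log, and the sketch is an illustration rather than libmachine's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... "exit 0"` succeeds against the guest.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	host := "192.168.39.11"
	key := "/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa"
	for i := 0; i < 20; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between WaitForSSH attempts
	}
	fmt.Println("gave up waiting for SSH")
}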
	I1011 21:17:27.089089   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:27.091628   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.091976   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.092001   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.092162   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:27.092189   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:27.092213   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:27.092221   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:27.092230   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:27.218963   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: <nil>: 
	I1011 21:17:27.219245   29617 main.go:141] libmachine: (ha-610874-m02) KVM machine creation complete!
	I1011 21:17:27.219616   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:27.220149   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220344   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220511   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:17:27.220532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetState
	I1011 21:17:27.221755   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:17:27.221770   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:17:27.221778   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:17:27.221786   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.223867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224229   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.224267   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224374   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.224532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224655   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224768   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.224964   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.225164   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.225177   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:17:27.333813   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.333841   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:17:27.333852   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.336538   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.336885   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.336909   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.337071   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.337262   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337411   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337545   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.337696   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.337866   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.337878   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:17:27.447511   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:17:27.447576   29617 main.go:141] libmachine: found compatible host: buildroot
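Provisioner detection runs `cat /etc/os-release` (output above) and matches the ID field; here ID=buildroot selects the buildroot provisioner. A small sketch of parsing that key=value format; the parser and sample string are illustrative:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style KEY=value lines into a map,
// stripping optional quotes, as in the output captured above.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println("detected provisioner for ID:", info["ID"]) // buildroot
}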
	I1011 21:17:27.447583   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:17:27.447590   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.447842   29617 buildroot.go:166] provisioning hostname "ha-610874-m02"
	I1011 21:17:27.447866   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.448033   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.450381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450763   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.450793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450924   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.451086   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451309   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451419   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.451547   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.451737   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.451749   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m02 && echo "ha-610874-m02" | sudo tee /etc/hostname
	I1011 21:17:27.572801   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m02
	
	I1011 21:17:27.572834   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.575352   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575751   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.575776   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575941   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.576093   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576220   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576346   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.576461   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.576637   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.576661   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:17:27.695886   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
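The shell run just above updates /etc/hosts idempotently: if a line already ends with the hostname it does nothing, otherwise it either rewrites the existing 127.0.1.1 entry or appends a new one. The same decision expressed over the file contents in Go, as an illustration of the logic rather than the provisioner's code:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the grep/sed/tee logic above: leave the contents
// unchanged if the name is present, otherwise set a 127.0.1.1 entry for it.
func ensureHostname(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		f := strings.Fields(line)
		if len(f) > 1 && f[len(f)-1] == name {
			return hosts // hostname already mapped
		}
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(hosts, "ha-610874-m02"))
}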
	I1011 21:17:27.695916   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:17:27.695938   29617 buildroot.go:174] setting up certificates
	I1011 21:17:27.695952   29617 provision.go:84] configureAuth start
	I1011 21:17:27.695968   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.696239   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:27.698924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699311   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.699342   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699459   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.701614   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.701924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.701942   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.702129   29617 provision.go:143] copyHostCerts
	I1011 21:17:27.702158   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702190   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:17:27.702199   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702263   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:17:27.702355   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702381   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:17:27.702389   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702438   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:17:27.702535   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702560   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:17:27.702567   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702604   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:17:27.702691   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m02 san=[127.0.0.1 192.168.39.11 ha-610874-m02 localhost minikube]
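The "generating server cert" step above issues a machine certificate signed by the profile's CA, with the SANs listed in the log (127.0.0.1, 192.168.39.11, ha-610874-m02, localhost, minikube). A compact crypto/x509 sketch of issuing such a certificate; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, error handling is omitted for brevity, and validity periods are examples:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real run loads ca.pem / ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874-m02"}},
		DNSNames:     []string{"ha-610874-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}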
	I1011 21:17:27.916455   29617 provision.go:177] copyRemoteCerts
	I1011 21:17:27.916517   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:17:27.916546   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.919220   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919586   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.919612   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919767   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.919931   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.920084   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.920214   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.005137   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:17:28.005206   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:17:28.030798   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:17:28.030868   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:17:28.053929   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:17:28.053992   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:17:28.077344   29617 provision.go:87] duration metric: took 381.381213ms to configureAuth
	I1011 21:17:28.077368   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:17:28.077553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:28.077631   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.079998   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080363   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.080391   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080550   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.080711   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080860   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080957   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.081126   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.081276   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.081289   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:17:28.305072   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:17:28.305099   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:17:28.305107   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetURL
	I1011 21:17:28.306348   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using libvirt version 6000000
	I1011 21:17:28.308766   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309119   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.309148   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309322   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:17:28.309336   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:17:28.309345   29617 client.go:171] duration metric: took 24.190578436s to LocalClient.Create
	I1011 21:17:28.309369   29617 start.go:167] duration metric: took 24.190632715s to libmachine.API.Create "ha-610874"
	I1011 21:17:28.309380   29617 start.go:293] postStartSetup for "ha-610874-m02" (driver="kvm2")
	I1011 21:17:28.309393   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:17:28.309414   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.309649   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:17:28.309678   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.311900   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312234   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.312257   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312366   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.312513   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.312670   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.312813   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.401258   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:17:28.405713   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:17:28.405741   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:17:28.405819   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:17:28.405893   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:17:28.405901   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:17:28.405976   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:17:28.415792   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:28.439288   29617 start.go:296] duration metric: took 129.894011ms for postStartSetup
	I1011 21:17:28.439338   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:28.439884   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.442343   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442733   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.442761   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442929   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:28.443099   29617 start.go:128] duration metric: took 24.341953324s to createHost
	I1011 21:17:28.443119   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.445585   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.445871   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.445894   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.446037   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.446185   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446313   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446509   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.446712   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.446859   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.446869   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:17:28.555655   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681448.532334020
	
	I1011 21:17:28.555684   29617 fix.go:216] guest clock: 1728681448.532334020
	I1011 21:17:28.555698   29617 fix.go:229] Guest: 2024-10-11 21:17:28.53233402 +0000 UTC Remote: 2024-10-11 21:17:28.443109707 +0000 UTC m=+72.164953096 (delta=89.224313ms)
	I1011 21:17:28.555717   29617 fix.go:200] guest clock delta is within tolerance: 89.224313ms
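The guest-clock check above runs `date +%s.%N` in the VM, converts the result to a timestamp, and compares it with the host's view of the same moment; only a delta beyond some tolerance would trigger a clock adjustment. A sketch of that comparison using the values captured in the log; the 2s threshold below is an assumed example, not minikube's constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time, e.g. the 1728681448.532334020 captured above.
func parseGuestClock(s string) (time.Time, error) {
	sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nanos, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, _ := parseGuestClock("1728681448.532334020")
	host := time.Unix(1728681448, 443109707) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // assumed example threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
}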
	I1011 21:17:28.555723   29617 start.go:83] releasing machines lock for "ha-610874-m02", held for 24.454670186s
	I1011 21:17:28.555747   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.555979   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.558215   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.558576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.558610   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.560996   29617 out.go:177] * Found network options:
	I1011 21:17:28.562345   29617 out.go:177]   - NO_PROXY=192.168.39.10
	W1011 21:17:28.563437   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.563463   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.563914   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564081   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564167   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:17:28.564198   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	W1011 21:17:28.564293   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.564371   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:17:28.564394   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.566543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566887   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.566920   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566948   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567066   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567235   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567341   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.567349   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567359   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567462   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.567515   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567649   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567774   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567889   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.804794   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:17:28.816172   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:17:28.816234   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:17:28.833684   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:17:28.833707   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:17:28.833785   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:17:28.850682   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:17:28.865268   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:17:28.865314   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:17:28.879804   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:17:28.893790   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:17:29.005060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:17:29.161552   29617 docker.go:233] disabling docker service ...
	I1011 21:17:29.161623   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:17:29.176030   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:17:29.188905   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:17:29.314012   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:17:29.444969   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:17:29.458929   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:17:29.477279   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:17:29.477336   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.487485   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:17:29.487557   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.497725   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.508074   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.518078   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:17:29.528405   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.538441   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.555119   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
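	(Note: after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the fragment below; the individual values come straight from the commands in this log, the exact surrounding layout of the file is an assumption:

	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)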
	I1011 21:17:29.568308   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:17:29.578239   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:17:29.578297   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:17:29.591777   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
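	(Note: the sysctl probe above failed only because /proc/sys/net/bridge/ does not exist until br_netfilter is loaded; the sequence the log then performs is, in plain shell:

	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables          # resolves once the module is loaded
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	)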
	I1011 21:17:29.601766   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:29.733693   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:17:29.832686   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:17:29.832769   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:17:29.837474   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:17:29.837531   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:17:29.841328   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:17:29.885910   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:17:29.885997   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.915959   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.947445   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:17:29.948743   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:17:29.949776   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:29.952438   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952742   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:29.952767   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952926   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:17:29.957045   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
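	(Note: the one-liner above rewrites /etc/hosts atomically, replacing any stale entry; cleaned up for readability it is:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts;
	      echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
	)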
	I1011 21:17:29.969401   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:17:29.969618   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:29.969904   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.969953   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:29.984875   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I1011 21:17:29.985308   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:29.985749   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:29.985772   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:29.986088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:29.986307   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:29.987951   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:29.988270   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.988309   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:30.002903   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I1011 21:17:30.003325   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:30.003771   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:30.003791   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:30.004088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:30.004322   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:30.004478   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.11
	I1011 21:17:30.004490   29617 certs.go:194] generating shared ca certs ...
	I1011 21:17:30.004507   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.004645   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:17:30.004706   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:17:30.004720   29617 certs.go:256] generating profile certs ...
	I1011 21:17:30.004812   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:17:30.004845   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a
	I1011 21:17:30.004865   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.254]
	I1011 21:17:30.068798   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a ...
	I1011 21:17:30.068829   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a: {Name:mk7e577273a37f1215e925a89aaf2057d9d70c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069010   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a ...
	I1011 21:17:30.069026   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a: {Name:mk272cb1eed2069075ccbf59f795f6618abcd353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069135   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:17:30.069298   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
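	(Note: the apiserver cert generated above carries the service IP, both node IPs and the HA VIP 192.168.39.254 as SANs; one way to confirm this, the command being an illustration rather than something this run executes:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)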
	I1011 21:17:30.069453   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:17:30.069470   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:17:30.069497   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:17:30.069514   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:17:30.069533   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:17:30.069553   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:17:30.069571   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:17:30.069589   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:17:30.069614   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:17:30.069674   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:17:30.069714   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:17:30.069727   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:17:30.069761   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:17:30.069795   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:17:30.069830   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:17:30.069888   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:30.069930   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.069950   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.069968   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.070008   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:30.073028   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073411   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:30.073439   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073677   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:30.073887   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:30.074102   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:30.074339   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:30.150977   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:17:30.155841   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:17:30.167973   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:17:30.172398   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:17:30.183178   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:17:30.187494   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:17:30.198396   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:17:30.202690   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:17:30.213924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:17:30.218228   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:17:30.229999   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:17:30.234409   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:17:30.246054   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:17:30.271630   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:17:30.295598   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:17:30.320158   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:17:30.346169   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1011 21:17:30.370669   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 21:17:30.396095   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:17:30.424361   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:17:30.449179   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:17:30.473592   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:17:30.497140   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:17:30.520773   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:17:30.537475   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:17:30.553696   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:17:30.573515   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:17:30.591050   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:17:30.607456   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:17:30.623663   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:17:30.639999   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:17:30.645863   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:17:30.656839   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661661   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661737   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.667927   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:17:30.678586   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:17:30.690465   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695106   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695178   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.700843   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:17:30.711530   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:17:30.722262   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726883   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726930   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.732484   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
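	(Note: each CA bundle is linked under /etc/ssl/certs by its OpenSSL subject hash, which is what the x509 -hash calls above compute; the generic pattern, shown here for the minikubeCA bundle that hashed to b5213941 in this run:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)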
	I1011 21:17:30.743130   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:17:30.747324   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:17:30.747378   29617 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I1011 21:17:30.747471   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
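	(Note: the [Service] override above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; applying such a drop-in by hand would look roughly like the sketch below, which mirrors the daemon-reload/start the log performs further down rather than the exact scp transfer:

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    # write the [Service] override shown above into 10-kubeadm.conf, then:
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet
	)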
	I1011 21:17:30.747503   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:17:30.747550   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:17:30.764827   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:17:30.764898   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
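	(Note: once this static pod manifest is placed under /etc/kubernetes/manifests and the kubelet picks it up, the VIP 192.168.39.254 should answer on port 8443; a hypothetical smoke test, not part of this run:

	    # -k: the cluster CA is not in the test host's system trust store
	    curl -k https://192.168.39.254:8443/healthz
	)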
	I1011 21:17:30.764958   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.774946   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:17:30.775004   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.785084   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:17:30.785115   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785173   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785210   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1011 21:17:30.785254   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1011 21:17:30.789999   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:17:30.790028   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:17:31.801070   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.801149   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.806312   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:17:31.806341   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:17:31.977093   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:17:32.035477   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.035590   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.049208   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:17:32.049241   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
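	(Note: the download URLs above carry a checksum= query that minikube verifies before caching the binaries; a manual equivalent of that verification, using the same release URLs:

	    curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
	    curl -fL  https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -o kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	)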
	I1011 21:17:32.383282   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:17:32.393090   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1011 21:17:32.409524   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:17:32.426347   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:17:32.443202   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:17:32.447193   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:32.459719   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:32.593682   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:32.611619   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:32.611941   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:32.611988   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:32.626650   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1011 21:17:32.627104   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:32.627665   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:32.627681   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:32.627997   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:32.628209   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:32.628355   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:17:32.628464   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:17:32.628490   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:32.631170   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631565   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:32.631594   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631751   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:32.631931   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:32.632068   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:32.632206   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:32.785858   29617 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:32.785905   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I1011 21:17:54.047983   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (21.262048482s)
	I1011 21:17:54.048020   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:17:54.524404   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m02 minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:17:54.662523   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:17:54.782630   29617 start.go:319] duration metric: took 22.154260063s to joinCluster
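	(Note: the join sequence above is the standard kubeadm control-plane flow: print a join command on the existing control plane, then run it on the new node with --control-plane; stripped of the minikube-specific paths it is, with the token and CA hash being per-cluster placeholders:

	    # on the existing control plane
	    kubeadm token create --print-join-command --ttl=0
	    # on the joining node, using the printed token and discovery hash
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443
	)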
	I1011 21:17:54.782703   29617 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:54.782988   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:54.784979   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:17:54.786144   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:55.109738   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:55.128457   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:55.128804   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:17:55.128882   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:17:55.129129   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m02" to be "Ready" ...
	I1011 21:17:55.129231   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.129241   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.129252   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.129258   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.140234   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:55.629803   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.629830   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.629841   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.629847   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.633275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.129516   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.129541   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.129552   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.129559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.132902   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.629511   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.629534   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.634698   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:17:57.129572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.129597   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.129605   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.129609   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.132668   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:57.133230   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:57.629639   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.629659   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.629667   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.629670   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.632880   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:58.129393   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.129417   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.129441   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.129446   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.132403   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:58.629999   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.630018   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.630026   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.630030   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.633746   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.130079   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.130096   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.130104   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.130108   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.133281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.133973   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:59.629323   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.629347   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.629358   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.629364   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.632796   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.129728   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.129749   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.129758   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.129767   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.133151   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.629977   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.630003   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.630015   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.630021   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.633099   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.130138   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.130160   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.130171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.130182   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.133307   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.134143   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:01.630135   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.630158   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.630171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.630177   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.634516   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:02.129957   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.129977   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.129985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.129990   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.209108   29617 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I1011 21:18:02.630223   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.630241   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.630249   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.630254   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.633360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:03.130145   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.130165   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.130172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.130176   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.134521   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:03.135482   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:03.630325   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.630348   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.630357   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.630363   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.633906   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.129848   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.129869   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.129880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.129885   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.133353   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.630352   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.630378   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.630391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.630395   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.633784   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:05.129622   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.129647   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.129658   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.129664   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.174718   29617 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1011 21:18:05.175206   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:05.629573   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.629601   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.629610   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.629614   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.633377   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.129366   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.129388   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.129396   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.129399   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.132592   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.630152   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.630174   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.630184   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.630190   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.633604   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.130251   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.130280   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.130292   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.130299   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.133640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.629546   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.629568   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.629578   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.629583   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.632932   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.633891   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:08.129786   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.129803   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.129811   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.129815   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.133290   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:08.629506   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.629533   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.633075   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.129541   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.129559   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.129567   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.129572   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.132640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.629665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.629684   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.629692   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.629697   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.632858   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:10.129866   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.129885   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.129893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.129897   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.132615   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:10.133150   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:10.629443   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.629475   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.629489   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.629493   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.632970   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.130002   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.130024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.130032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.130035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.133677   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.629439   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.629465   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.629477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.629482   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.632816   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.130049   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.130071   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.130080   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.130083   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.133179   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.133716   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:12.630085   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.630110   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.630121   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.630127   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.633114   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:13.130226   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.130245   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.130253   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.130258   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.133707   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:13.629976   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.630005   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.630016   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.630022   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.633601   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.129823   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.129846   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.129857   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.129863   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.132927   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.630032   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.630053   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.630062   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.630070   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.633208   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.633750   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:15.129885   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.129909   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.129919   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.129924   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.132958   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.630000   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.630024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.630032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.630035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.632986   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.633633   29617 node_ready.go:49] node "ha-610874-m02" has status "Ready":"True"
	I1011 21:18:15.633647   29617 node_ready.go:38] duration metric: took 20.504503338s for node "ha-610874-m02" to be "Ready" ...
	I1011 21:18:15.633655   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:15.633709   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:15.633718   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.633724   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.633728   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.637582   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.643886   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.643972   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:18:15.643983   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.643993   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.643999   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.646763   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.647514   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.647529   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.647536   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.647539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.649945   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.650586   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.650602   29617 pod_ready.go:82] duration metric: took 6.694777ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650623   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650679   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:18:15.650688   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.650699   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.650707   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.652943   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.653673   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.653687   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.653696   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.653701   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.655886   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.656382   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.656397   29617 pod_ready.go:82] duration metric: took 5.765488ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656405   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656451   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:18:15.656461   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.656471   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.656477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.658729   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.659391   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.659409   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.659419   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.659426   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.661629   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.662114   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.662130   29617 pod_ready.go:82] duration metric: took 5.719352ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662137   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662181   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:18:15.662190   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.662197   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.662201   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.664800   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.665273   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.665286   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.665294   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.665298   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.667272   29617 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1011 21:18:15.667736   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.667757   29617 pod_ready.go:82] duration metric: took 5.613486ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.667773   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.830074   29617 request.go:632] Waited for 162.243136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830160   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830168   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.830178   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.830188   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.833590   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.030666   29617 request.go:632] Waited for 196.378996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030722   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030728   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.030735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.030739   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.033962   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.034580   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.034599   29617 pod_ready.go:82] duration metric: took 366.81416ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.034608   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.230672   29617 request.go:632] Waited for 195.982779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230778   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230790   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.230801   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.230810   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.234030   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.430609   29617 request.go:632] Waited for 195.69013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430701   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430712   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.430735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.433742   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.434219   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.434239   29617 pod_ready.go:82] duration metric: took 399.609699ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.434252   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.630260   29617 request.go:632] Waited for 195.941074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630337   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630342   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.630350   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.630357   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.633657   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.830752   29617 request.go:632] Waited for 196.369395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830804   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830811   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.830820   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.830827   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.833807   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.834437   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.834455   29617 pod_ready.go:82] duration metric: took 400.195609ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.834465   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.030516   29617 request.go:632] Waited for 195.993213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030589   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030595   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.030607   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.030627   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.034122   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.230257   29617 request.go:632] Waited for 195.302255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230322   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230329   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.230337   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.230342   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.233560   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.234217   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.234239   29617 pod_ready.go:82] duration metric: took 399.767293ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.234256   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.430433   29617 request.go:632] Waited for 196.107897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430509   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430515   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.430526   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.430534   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.434262   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.630356   29617 request.go:632] Waited for 195.345057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630426   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630431   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.630439   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.630444   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.633591   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.634036   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.634054   29617 pod_ready.go:82] duration metric: took 399.790817ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.634064   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.830520   29617 request.go:632] Waited for 196.385742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830591   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830596   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.830603   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.830607   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.833974   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.030999   29617 request.go:632] Waited for 196.369359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031062   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031068   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.031075   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.031079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.034522   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.035045   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.035060   29617 pod_ready.go:82] duration metric: took 400.990689ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.035069   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.230101   29617 request.go:632] Waited for 194.964535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230173   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230179   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.230187   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.230191   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.233153   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:18.430174   29617 request.go:632] Waited for 196.304225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430252   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430258   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.430265   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.430271   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.433684   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.434857   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.434876   29617 pod_ready.go:82] duration metric: took 399.800525ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.434886   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.630997   29617 request.go:632] Waited for 196.051862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631067   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631072   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.631079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.631090   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.634569   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.830555   29617 request.go:632] Waited for 195.378028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830645   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830652   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.830659   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.830665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.834017   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.834881   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.834901   29617 pod_ready.go:82] duration metric: took 400.009355ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.834913   29617 pod_ready.go:39] duration metric: took 3.201246724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:18.834925   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:18:18.834977   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:18:18.851851   29617 api_server.go:72] duration metric: took 24.069111498s to wait for apiserver process to appear ...
	I1011 21:18:18.851878   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:18:18.851897   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:18:18.856543   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:18:18.856610   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:18:18.856615   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.856622   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.856626   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.857613   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:18:18.857701   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:18:18.857721   29617 api_server.go:131] duration metric: took 5.836547ms to wait for apiserver health ...
	I1011 21:18:18.857730   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:18:19.030066   29617 request.go:632] Waited for 172.254223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030130   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030136   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.030143   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.030148   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.034696   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.039508   29617 system_pods.go:59] 17 kube-system pods found
	I1011 21:18:19.039540   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.039546   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.039551   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.039557   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.039561   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.039566   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.039570   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.039579   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.039584   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.039592   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.039597   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.039601   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.039606   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.039612   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.039615   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.039619   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.039622   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.039631   29617 system_pods.go:74] duration metric: took 181.896084ms to wait for pod list to return data ...
	I1011 21:18:19.039640   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:18:19.230981   29617 request.go:632] Waited for 191.269571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231051   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231057   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.231064   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.231067   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.235209   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.235407   29617 default_sa.go:45] found service account: "default"
	I1011 21:18:19.235421   29617 default_sa.go:55] duration metric: took 195.775642ms for default service account to be created ...
	I1011 21:18:19.235428   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:18:19.430605   29617 request.go:632] Waited for 195.123077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430704   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430710   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.430718   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.435793   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:18:19.439894   29617 system_pods.go:86] 17 kube-system pods found
	I1011 21:18:19.439921   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.439929   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.439935   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.439942   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.439947   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.439953   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.439959   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.439965   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.439972   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.439980   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.439986   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.439995   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.440002   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.440010   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.440016   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.440020   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.440025   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.440033   29617 system_pods.go:126] duration metric: took 204.599583ms to wait for k8s-apps to be running ...
	I1011 21:18:19.440045   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:18:19.440094   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:19.455815   29617 system_svc.go:56] duration metric: took 15.763998ms WaitForService to wait for kubelet
	I1011 21:18:19.455841   29617 kubeadm.go:582] duration metric: took 24.673107672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:18:19.455860   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:18:19.630302   29617 request.go:632] Waited for 174.358774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630357   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630364   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.630372   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.630379   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.634356   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:19.635316   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635343   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635358   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635363   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635371   29617 node_conditions.go:105] duration metric: took 179.50548ms to run NodePressure ...
	I1011 21:18:19.635384   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:18:19.635415   29617 start.go:255] writing updated cluster config ...
	I1011 21:18:19.637553   29617 out.go:201] 
	I1011 21:18:19.638933   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:19.639018   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.640415   29617 out.go:177] * Starting "ha-610874-m03" control-plane node in "ha-610874" cluster
	I1011 21:18:19.641511   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:18:19.641529   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:18:19.641627   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:18:19.641638   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:18:19.641712   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.641856   29617 start.go:360] acquireMachinesLock for ha-610874-m03: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:18:19.641897   29617 start.go:364] duration metric: took 24.129µs to acquireMachinesLock for "ha-610874-m03"
	I1011 21:18:19.641912   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:19.642000   29617 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1011 21:18:19.643322   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:18:19.643394   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:19.643424   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:19.657905   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I1011 21:18:19.658394   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:19.658868   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:19.658887   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:19.659186   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:19.659360   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:19.659497   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:19.659661   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:18:19.659689   29617 client.go:168] LocalClient.Create starting
	I1011 21:18:19.659716   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:18:19.659744   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659756   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659802   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:18:19.659820   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659830   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659844   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:18:19.659851   29617 main.go:141] libmachine: (ha-610874-m03) Calling .PreCreateCheck
	I1011 21:18:19.659994   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:19.660351   29617 main.go:141] libmachine: Creating machine...
	I1011 21:18:19.660362   29617 main.go:141] libmachine: (ha-610874-m03) Calling .Create
	I1011 21:18:19.660504   29617 main.go:141] libmachine: (ha-610874-m03) Creating KVM machine...
	I1011 21:18:19.661678   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing default KVM network
	I1011 21:18:19.661785   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing private KVM network mk-ha-610874
	I1011 21:18:19.661907   29617 main.go:141] libmachine: (ha-610874-m03) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.661930   29617 main.go:141] libmachine: (ha-610874-m03) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:18:19.662023   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.661913   30793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.662086   29617 main.go:141] libmachine: (ha-610874-m03) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:18:19.893907   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.893764   30793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa...
	I1011 21:18:19.985249   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985139   30793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk...
	I1011 21:18:19.985285   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing magic tar header
	I1011 21:18:19.985300   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing SSH key tar header
	I1011 21:18:19.985311   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985257   30793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.985329   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03
	I1011 21:18:19.985350   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 (perms=drwx------)
	I1011 21:18:19.985373   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:18:19.985396   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.985411   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:18:19.985426   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:18:19.985434   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:18:19.985440   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:18:19.985456   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:18:19.985468   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:18:19.985478   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:18:19.985499   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:18:19.985509   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:19.985516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home
	I1011 21:18:19.985526   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Skipping /home - not owner
	I1011 21:18:19.986460   29617 main.go:141] libmachine: (ha-610874-m03) define libvirt domain using xml: 
	I1011 21:18:19.986487   29617 main.go:141] libmachine: (ha-610874-m03) <domain type='kvm'>
	I1011 21:18:19.986497   29617 main.go:141] libmachine: (ha-610874-m03)   <name>ha-610874-m03</name>
	I1011 21:18:19.986505   29617 main.go:141] libmachine: (ha-610874-m03)   <memory unit='MiB'>2200</memory>
	I1011 21:18:19.986513   29617 main.go:141] libmachine: (ha-610874-m03)   <vcpu>2</vcpu>
	I1011 21:18:19.986528   29617 main.go:141] libmachine: (ha-610874-m03)   <features>
	I1011 21:18:19.986539   29617 main.go:141] libmachine: (ha-610874-m03)     <acpi/>
	I1011 21:18:19.986547   29617 main.go:141] libmachine: (ha-610874-m03)     <apic/>
	I1011 21:18:19.986559   29617 main.go:141] libmachine: (ha-610874-m03)     <pae/>
	I1011 21:18:19.986567   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.986578   29617 main.go:141] libmachine: (ha-610874-m03)   </features>
	I1011 21:18:19.986587   29617 main.go:141] libmachine: (ha-610874-m03)   <cpu mode='host-passthrough'>
	I1011 21:18:19.986598   29617 main.go:141] libmachine: (ha-610874-m03)   
	I1011 21:18:19.986605   29617 main.go:141] libmachine: (ha-610874-m03)   </cpu>
	I1011 21:18:19.986657   29617 main.go:141] libmachine: (ha-610874-m03)   <os>
	I1011 21:18:19.986683   29617 main.go:141] libmachine: (ha-610874-m03)     <type>hvm</type>
	I1011 21:18:19.986694   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='cdrom'/>
	I1011 21:18:19.986706   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='hd'/>
	I1011 21:18:19.986714   29617 main.go:141] libmachine: (ha-610874-m03)     <bootmenu enable='no'/>
	I1011 21:18:19.986723   29617 main.go:141] libmachine: (ha-610874-m03)   </os>
	I1011 21:18:19.986733   29617 main.go:141] libmachine: (ha-610874-m03)   <devices>
	I1011 21:18:19.986743   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='cdrom'>
	I1011 21:18:19.986759   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/boot2docker.iso'/>
	I1011 21:18:19.986773   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hdc' bus='scsi'/>
	I1011 21:18:19.986784   29617 main.go:141] libmachine: (ha-610874-m03)       <readonly/>
	I1011 21:18:19.986793   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986804   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='disk'>
	I1011 21:18:19.986816   29617 main.go:141] libmachine: (ha-610874-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:18:19.986831   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk'/>
	I1011 21:18:19.986840   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hda' bus='virtio'/>
	I1011 21:18:19.986871   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986898   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986911   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='mk-ha-610874'/>
	I1011 21:18:19.986922   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986933   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986941   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986948   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='default'/>
	I1011 21:18:19.986962   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986972   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986987   29617 main.go:141] libmachine: (ha-610874-m03)     <serial type='pty'>
	I1011 21:18:19.986999   29617 main.go:141] libmachine: (ha-610874-m03)       <target port='0'/>
	I1011 21:18:19.987006   29617 main.go:141] libmachine: (ha-610874-m03)     </serial>
	I1011 21:18:19.987015   29617 main.go:141] libmachine: (ha-610874-m03)     <console type='pty'>
	I1011 21:18:19.987025   29617 main.go:141] libmachine: (ha-610874-m03)       <target type='serial' port='0'/>
	I1011 21:18:19.987033   29617 main.go:141] libmachine: (ha-610874-m03)     </console>
	I1011 21:18:19.987052   29617 main.go:141] libmachine: (ha-610874-m03)     <rng model='virtio'>
	I1011 21:18:19.987060   29617 main.go:141] libmachine: (ha-610874-m03)       <backend model='random'>/dev/random</backend>
	I1011 21:18:19.987068   29617 main.go:141] libmachine: (ha-610874-m03)     </rng>
	I1011 21:18:19.987076   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987087   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987095   29617 main.go:141] libmachine: (ha-610874-m03)   </devices>
	I1011 21:18:19.987107   29617 main.go:141] libmachine: (ha-610874-m03) </domain>
	I1011 21:18:19.987120   29617 main.go:141] libmachine: (ha-610874-m03) 
	I1011 21:18:19.993869   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:ec:a1:8a in network default
	I1011 21:18:19.994634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:19.994661   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring networks are active...
	I1011 21:18:19.995468   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network default is active
	I1011 21:18:19.995798   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network mk-ha-610874 is active
	I1011 21:18:19.996173   29617 main.go:141] libmachine: (ha-610874-m03) Getting domain xml...
	I1011 21:18:19.996928   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:21.254226   29617 main.go:141] libmachine: (ha-610874-m03) Waiting to get IP...
	I1011 21:18:21.254939   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.255287   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.255333   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.255277   30793 retry.go:31] will retry after 299.921958ms: waiting for machine to come up
	I1011 21:18:21.557116   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.557606   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.557634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.557554   30793 retry.go:31] will retry after 286.000289ms: waiting for machine to come up
	I1011 21:18:21.844948   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.845467   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.845490   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.845417   30793 retry.go:31] will retry after 387.119662ms: waiting for machine to come up
	I1011 21:18:22.233861   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.234347   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.234371   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.234316   30793 retry.go:31] will retry after 432.218769ms: waiting for machine to come up
	I1011 21:18:22.667570   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.668013   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.668044   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.667960   30793 retry.go:31] will retry after 681.692732ms: waiting for machine to come up
	I1011 21:18:23.350671   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:23.351087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:23.351114   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:23.351059   30793 retry.go:31] will retry after 838.189989ms: waiting for machine to come up
	I1011 21:18:24.191008   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:24.191479   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:24.191510   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:24.191434   30793 retry.go:31] will retry after 815.751815ms: waiting for machine to come up
	I1011 21:18:25.008738   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:25.009063   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:25.009087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:25.009033   30793 retry.go:31] will retry after 1.238801147s: waiting for machine to come up
	I1011 21:18:26.249732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:26.250130   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:26.250160   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:26.250077   30793 retry.go:31] will retry after 1.384996284s: waiting for machine to come up
	I1011 21:18:27.636107   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:27.636581   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:27.636616   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:27.636560   30793 retry.go:31] will retry after 2.228451179s: waiting for machine to come up
	I1011 21:18:29.866214   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:29.866564   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:29.866592   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:29.866517   30793 retry.go:31] will retry after 2.670642081s: waiting for machine to come up
	I1011 21:18:32.539631   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:32.539928   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:32.539955   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:32.539912   30793 retry.go:31] will retry after 2.348031686s: waiting for machine to come up
	I1011 21:18:34.889816   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:34.890238   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:34.890284   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:34.890163   30793 retry.go:31] will retry after 4.066011924s: waiting for machine to come up
	I1011 21:18:38.960327   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:38.960729   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:38.960754   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:38.960678   30793 retry.go:31] will retry after 5.543915191s: waiting for machine to come up
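Editor's note: the run of "will retry after ..." lines above is minikube's retry helper polling libvirt for the guest's DHCP lease with a growing, jittered delay until an IP appears. The sketch below illustrates that pattern only; waitForIP and its backoff constants are hypothetical and are not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() until it returns an address or the deadline passes,
// sleeping a jittered, growing interval between attempts -- the same shape as
// the "will retry after ..." lines in the log above. Illustrative only.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 250 * time.Millisecond // starting delay is an assumption
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter, then grow the base delay for the next round
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.39.222", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}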
	I1011 21:18:44.509752   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510179   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has current primary IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510202   29617 main.go:141] libmachine: (ha-610874-m03) Found IP for machine: 192.168.39.222
	I1011 21:18:44.510223   29617 main.go:141] libmachine: (ha-610874-m03) Reserving static IP address...
	I1011 21:18:44.510657   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find host DHCP lease matching {name: "ha-610874-m03", mac: "52:54:00:54:11:ff", ip: "192.168.39.222"} in network mk-ha-610874
	I1011 21:18:44.581123   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Getting to WaitForSSH function...
	I1011 21:18:44.581152   29617 main.go:141] libmachine: (ha-610874-m03) Reserved static IP address: 192.168.39.222
	I1011 21:18:44.581189   29617 main.go:141] libmachine: (ha-610874-m03) Waiting for SSH to be available...
	I1011 21:18:44.584495   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585006   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.585034   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585216   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH client type: external
	I1011 21:18:44.585245   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa (-rw-------)
	I1011 21:18:44.585269   29617 main.go:141] libmachine: (ha-610874-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:18:44.585288   29617 main.go:141] libmachine: (ha-610874-m03) DBG | About to run SSH command:
	I1011 21:18:44.585303   29617 main.go:141] libmachine: (ha-610874-m03) DBG | exit 0
	I1011 21:18:44.714704   29617 main.go:141] libmachine: (ha-610874-m03) DBG | SSH cmd err, output: <nil>: 
	I1011 21:18:44.714970   29617 main.go:141] libmachine: (ha-610874-m03) KVM machine creation complete!
	I1011 21:18:44.715289   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:44.715822   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.715996   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.716157   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:18:44.716172   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetState
	I1011 21:18:44.717356   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:18:44.717371   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:18:44.717376   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:18:44.717382   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.719703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.719994   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.720030   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.720182   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.720357   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720507   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.720910   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.721104   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.721116   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:18:44.833939   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:44.833957   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:18:44.833964   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.836658   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837043   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.837069   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837281   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.837454   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837581   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837720   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.837855   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.838048   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.838063   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:18:44.951348   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:18:44.951417   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:18:44.951426   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:18:44.951433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951662   29617 buildroot.go:166] provisioning hostname "ha-610874-m03"
	I1011 21:18:44.951688   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951865   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.954732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955115   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.955139   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955310   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.955477   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955769   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.955914   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.956070   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.956081   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m03 && echo "ha-610874-m03" | sudo tee /etc/hostname
	I1011 21:18:45.085832   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m03
	
	I1011 21:18:45.085866   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.088705   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089140   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.089165   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089355   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.089596   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089767   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089921   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.090058   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.090210   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.090224   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:18:45.213456   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
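Editor's note: the two SSH commands just above set the guest's hostname and make sure /etc/hosts resolves it (by rewriting or appending the 127.0.1.1 entry). The snippet below simply reconstructs those commands from the log for readability; buildHostnameCmds is a hypothetical helper, not the provisioner's source.

package main

import "fmt"

// buildHostnameCmds returns the two shell commands seen in the log: one that
// sets the hostname and writes /etc/hostname, and one that ensures /etc/hosts
// has a matching 127.0.1.1 line. Reconstructed for illustration only.
func buildHostnameCmds(host string) []string {
	setHostname := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host)
	fixHosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host)
	return []string{setHostname, fixHosts}
}

func main() {
	for _, c := range buildHostnameCmds("ha-610874-m03") {
		fmt.Println(c)
		fmt.Println("---")
	}
}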
	I1011 21:18:45.213485   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:18:45.213503   29617 buildroot.go:174] setting up certificates
	I1011 21:18:45.213511   29617 provision.go:84] configureAuth start
	I1011 21:18:45.213520   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:45.213850   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.216516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.216909   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.216945   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.217058   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.219374   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219692   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.219725   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219870   29617 provision.go:143] copyHostCerts
	I1011 21:18:45.219895   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.219927   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:18:45.219936   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.220002   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:18:45.220073   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220091   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:18:45.220098   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220120   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:18:45.220162   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220179   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:18:45.220186   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220212   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:18:45.220261   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m03 san=[127.0.0.1 192.168.39.222 ha-610874-m03 localhost minikube]
	I1011 21:18:45.381567   29617 provision.go:177] copyRemoteCerts
	I1011 21:18:45.381648   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:18:45.381676   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.384744   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385058   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.385090   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385241   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.385433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.385594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.385733   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.474156   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:18:45.474223   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:18:45.499839   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:18:45.499913   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:18:45.523935   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:18:45.524000   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:18:45.546732   29617 provision.go:87] duration metric: took 333.208457ms to configureAuth
	I1011 21:18:45.546761   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:18:45.546986   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:45.547077   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.549423   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549746   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.549774   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549963   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.550145   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550309   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550436   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.550559   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.550750   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.550765   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:18:45.793129   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:18:45.793158   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:18:45.793166   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetURL
	I1011 21:18:45.794426   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using libvirt version 6000000
	I1011 21:18:45.796703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797072   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.797104   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797300   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:18:45.797313   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:18:45.797320   29617 client.go:171] duration metric: took 26.137622442s to LocalClient.Create
	I1011 21:18:45.797348   29617 start.go:167] duration metric: took 26.137680612s to libmachine.API.Create "ha-610874"
	I1011 21:18:45.797358   29617 start.go:293] postStartSetup for "ha-610874-m03" (driver="kvm2")
	I1011 21:18:45.797373   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:18:45.797391   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:45.797597   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:18:45.797632   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.799512   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799830   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.799859   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799989   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.800143   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.800296   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.800459   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.889596   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:18:45.893814   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:18:45.893840   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:18:45.893920   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:18:45.893992   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:18:45.894000   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:18:45.894078   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:18:45.903909   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:45.928066   29617 start.go:296] duration metric: took 130.695494ms for postStartSetup
	I1011 21:18:45.928125   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:45.928694   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.931370   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.931736   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.931757   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.932008   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:45.932227   29617 start.go:128] duration metric: took 26.290217466s to createHost
	I1011 21:18:45.932255   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.934599   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.934957   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.934980   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.935141   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.935302   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935450   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.935755   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.935906   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.935915   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:18:46.051363   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681526.030608830
	
	I1011 21:18:46.051382   29617 fix.go:216] guest clock: 1728681526.030608830
	I1011 21:18:46.051389   29617 fix.go:229] Guest: 2024-10-11 21:18:46.03060883 +0000 UTC Remote: 2024-10-11 21:18:45.932240932 +0000 UTC m=+149.654084325 (delta=98.367898ms)
	I1011 21:18:46.051403   29617 fix.go:200] guest clock delta is within tolerance: 98.367898ms
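Editor's note: the fix.go lines above read the guest's clock over SSH (date +%s.%N) and compare it with the host's timestamp; here the ~98ms delta is accepted as within tolerance. A worked check of that arithmetic, using the two timestamps printed in the log (the 2-second tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the guest clock is close enough to the
// host clock. The tolerance value is assumed; the timestamps below are the ones
// logged above, and the computed delta matches the logged 98.367898ms.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 10, 11, 21, 18, 45, 932240932, time.UTC)  // local (Remote) timestamp
	guest := time.Date(2024, 10, 11, 21, 18, 46, 30608830, time.UTC)  // guest clock reading
	delta, ok := withinClockTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}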
	I1011 21:18:46.051408   29617 start.go:83] releasing machines lock for "ha-610874-m03", held for 26.409503393s
	I1011 21:18:46.051425   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.051638   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:46.054103   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.054465   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.054484   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.056759   29617 out.go:177] * Found network options:
	I1011 21:18:46.058108   29617 out.go:177]   - NO_PROXY=192.168.39.10,192.168.39.11
	W1011 21:18:46.059377   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.059397   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.059412   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.059861   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060012   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060103   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:18:46.060140   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	W1011 21:18:46.060197   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.060218   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.060273   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:18:46.060291   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:46.062781   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063134   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063156   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063177   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063332   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063533   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.063672   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.063695   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063722   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063809   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063917   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.063937   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.064070   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.064193   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.315238   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:18:46.321537   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:18:46.321622   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:18:46.338777   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:18:46.338801   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:18:46.338861   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:18:46.354279   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:18:46.367905   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:18:46.367951   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:18:46.382395   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:18:46.395784   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:18:46.527698   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:18:46.689393   29617 docker.go:233] disabling docker service ...
	I1011 21:18:46.689462   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:18:46.704203   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:18:46.717422   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:18:46.835539   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:18:46.954100   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:18:46.969007   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:18:46.988391   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:18:46.988466   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:46.998736   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:18:46.998798   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.011000   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.020896   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.032139   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:18:47.042674   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.053148   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.070001   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.079898   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:18:47.089404   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:18:47.089464   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:18:47.101955   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:18:47.111372   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:47.225475   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
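Editor's note: the block above is the CRI-O preparation step: sed edits to /etc/crio/crio.conf.d/02-crio.conf for the pause image, cgroupfs driver and unprivileged ports, followed by br_netfilter/ip_forward setup and a crio restart. The snippet below just collects those commands, in order, as they appear in the log; crioSetupCmds is a hypothetical helper added for readability.

package main

import "fmt"

// crioSetupCmds lists the shell commands the log runs to point CRI-O at the
// pause image, switch it to the cgroupfs cgroup driver, set conmon_cgroup,
// load br_netfilter and enable IP forwarding, then restart the service.
// Values and paths are copied from the log; the function itself is illustrative.
func crioSetupCmds(pauseImage string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioSetupCmds("registry.k8s.io/pause:3.10") {
		fmt.Println(c)
	}
}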
	I1011 21:18:47.314226   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:18:47.314298   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:18:47.318974   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:18:47.319034   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:18:47.322683   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:18:47.363256   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:18:47.363346   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.390105   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.420312   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:18:47.421976   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:18:47.423450   29617 out.go:177]   - env NO_PROXY=192.168.39.10,192.168.39.11
	I1011 21:18:47.424609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:47.427015   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427408   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:47.427435   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427580   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:18:47.432290   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:47.445118   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:18:47.445341   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:47.445588   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.445623   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.460772   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1011 21:18:47.461253   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.461758   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.461778   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.462071   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.462258   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:18:47.463800   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:47.464063   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.464094   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.478835   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1011 21:18:47.479190   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.479632   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.479653   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.479922   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.480090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:47.480267   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.222
	I1011 21:18:47.480276   29617 certs.go:194] generating shared ca certs ...
	I1011 21:18:47.480289   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.480440   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:18:47.480492   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:18:47.480504   29617 certs.go:256] generating profile certs ...
	I1011 21:18:47.480599   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:18:47.480632   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda
	I1011 21:18:47.480651   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:18:47.766344   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda ...
	I1011 21:18:47.766372   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda: {Name:mk781938e611c805d4d3614e2a3753b43a334879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766558   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda ...
	I1011 21:18:47.766576   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda: {Name:mk730a6176bc0314778375ee5435bf733e13e8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766701   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:18:47.766854   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
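Editor's note: the apiserver certificate generated above carries IP SANs for the service CIDR address (10.96.0.1), localhost, every control-plane node IP and the HA virtual IP 192.168.39.254, so clients can reach the API server through any of them. The sketch below only shows how such a SAN list ends up in one x509 template; the subject, DNS names and validity period are assumptions and the CA signing step is omitted.

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// apiserverCertTemplate builds an x509 template whose IP SANs cover the
// addresses listed in the log. Subject, DNSNames and lifetime are illustrative
// assumptions, not minikube's exact values.
func apiserverCertTemplate(sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-610874-m03", "localhost", "minikube"},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		}
	}
	return tmpl
}

func main() {
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.39.10", "192.168.39.11", "192.168.39.222", "192.168.39.254"}
	fmt.Println(apiserverCertTemplate(sans).IPAddresses)
}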
	I1011 21:18:47.767020   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:18:47.767039   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:18:47.767069   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:18:47.767088   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:18:47.767105   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:18:47.767122   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:18:47.767138   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:18:47.767155   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:18:47.790727   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:18:47.790840   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:18:47.790890   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:18:47.790900   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:18:47.790934   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:18:47.790968   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:18:47.791002   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:18:47.791046   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:47.791074   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:18:47.791090   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:47.791103   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:18:47.791139   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:47.794048   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794490   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:47.794521   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794666   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:47.794865   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:47.795021   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:47.795166   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:47.874924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:18:47.879896   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:18:47.890508   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:18:47.894884   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:18:47.906444   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:18:47.911071   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:18:47.924640   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:18:47.929130   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:18:47.939543   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:18:47.943420   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:18:47.952418   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:18:47.956156   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:18:47.965542   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:18:47.990672   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:18:48.018655   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:18:48.046638   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:18:48.075087   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1011 21:18:48.099261   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1011 21:18:48.125316   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:18:48.150810   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:18:48.176240   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:18:48.202437   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:18:48.228304   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:18:48.250733   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:18:48.267330   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:18:48.284282   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:18:48.300414   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:18:48.317312   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:18:48.334266   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:18:48.350540   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:18:48.366454   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:18:48.371903   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:18:48.382259   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386521   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386558   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.392096   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:18:48.402476   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:18:48.414951   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420157   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420212   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.426147   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:18:48.437228   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:18:48.447706   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452447   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452490   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.457944   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
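The three test/ln pairs above install each CA into /etc/ssl/certs twice: once under its file name and once under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate trust anchors. A minimal Go sketch of that pattern, assuming openssl is on PATH; the installCACert helper and the hard-coded paths are illustrative, not minikube's own code:

    // installCACert links a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject-hash name, mirroring the `openssl x509 -hash` + `ln -fs`
    // sequence in the log above. Illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCACert(certPath string) error {
    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Recreate the link if it already exists (equivalent to ln -fs).
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }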
	I1011 21:18:48.469558   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:18:48.473684   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:18:48.473727   29617 kubeadm.go:934] updating node {m03 192.168.39.222 8443 v1.31.1 crio true true} ...
	I1011 21:18:48.473800   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:18:48.473821   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:18:48.473848   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:18:48.489435   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:18:48.489512   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
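The generated manifest enables kube-vip's control-plane load balancing (lb_enable/lb_port), which relies on IPVS; that is why the step just before the config dump modprobes ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack. A small sketch of one way to confirm the module is present before turning the option on; reading /proc/modules is an assumed convenience check, not what minikube itself does (and it will not see a built-in ip_vs):

    // ipvsAvailable reports whether the ip_vs kernel module shows up in
    // /proc/modules. Illustrative check only.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func ipvsAvailable() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs ") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := ipvsAvailable()
    	fmt.Println("ip_vs loaded:", ok, "err:", err)
    }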
	I1011 21:18:48.489571   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.499111   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:18:48.499166   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:18:48.509200   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509211   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1011 21:18:48.509233   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509250   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509288   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509215   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:48.517849   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:18:48.517877   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:18:48.530466   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.530534   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:18:48.530551   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:18:48.530575   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.584347   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:18:48.584388   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
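The "Not caching binary" lines mean kubelet, kubectl and kubeadm are fetched from dl.k8s.io with a checksum=file:...sha256 companion URL and verified before being copied into /var/lib/minikube/binaries/v1.31.1. A minimal sketch of that download-and-verify pattern; the downloadVerified helper is hypothetical, the URL is the one from the log, and the expected digest would come from the .sha256 file:

    // downloadVerified fetches a URL, checks its SHA-256 against the expected
    // hex digest, and writes it to dst. Sketch of the checksum pattern only.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func downloadVerified(url, expectedHex, dst string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return err
    	}
    	sum := sha256.Sum256(body)
    	if hex.EncodeToString(sum[:]) != expectedHex {
    		return fmt.Errorf("checksum mismatch for %s", url)
    	}
    	return os.WriteFile(dst, body, 0o755)
    }

    func main() {
    	// The expected digest is a placeholder; in practice it is read from
    	// the kubectl.sha256 companion file referenced in the log.
    	err := downloadVerified(
    		"https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl",
    		"<hex digest from kubectl.sha256>",
    		"/tmp/kubectl")
    	fmt.Println(err)
    }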
	I1011 21:18:49.359545   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:18:49.369067   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 21:18:49.386375   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:18:49.402697   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:18:49.419546   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:18:49.424269   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:49.437035   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:49.561710   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:18:49.579907   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:49.580262   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:49.580306   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:49.596329   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1011 21:18:49.596782   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:49.597244   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:49.597267   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:49.597574   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:49.597761   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:49.597902   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:18:49.598045   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:18:49.598061   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:49.601098   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601584   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:49.601613   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601735   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:49.601902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:49.602044   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:49.602182   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:49.765636   29617 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:49.765692   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I1011 21:19:12.027662   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (22.261919257s)
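The join command above authenticates the new control-plane node with --discovery-token-ca-cert-hash sha256:..., a pin that kubeadm derives from the cluster CA's public key (SHA-256 over the DER-encoded SubjectPublicKeyInfo). A short sketch of recomputing that value from ca.crt, assuming the standard kubeadm scheme; the caCertHash helper and the path are illustrative:

    // caCertHash reproduces the kubeadm discovery-token-ca-cert-hash for a
    // PEM-encoded CA certificate: sha256 over the SubjectPublicKeyInfo.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func caCertHash(pemPath string) (string, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	fmt.Println(h, err)
    }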
	I1011 21:19:12.027723   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:19:12.601287   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m03 minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:19:12.730357   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:19:12.852046   29617 start.go:319] duration metric: took 23.254138834s to joinCluster
	I1011 21:19:12.852173   29617 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:19:12.852553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:19:12.853928   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:19:12.855524   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:19:13.141318   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:19:13.175499   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:19:13.175739   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:19:13.175813   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:19:13.176040   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:13.176203   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.176216   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.176230   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.176236   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.180062   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:13.676530   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.676550   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.676559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.676563   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.680629   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.176763   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.176790   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.176802   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.176813   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.181595   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.676942   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.676962   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.676971   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.676974   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.680092   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.177198   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.177232   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.177243   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.177251   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.181013   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.181507   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:15.676949   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.676975   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.676985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.676991   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.680404   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.176381   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.176401   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.176411   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.176416   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.179611   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.676230   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.676253   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.676264   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.676269   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.679007   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.176965   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.176991   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.177003   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.177010   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.179578   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.677212   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.677239   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.677250   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.677257   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.680848   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:17.681529   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:18.176617   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.176642   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.176652   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.176657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.180501   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:18.676324   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.676344   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.676352   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.676356   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.680172   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:19.176785   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.176805   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.176813   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.176817   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.180917   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:19.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.676229   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.676239   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.676247   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.679537   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:20.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.176578   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.176586   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.176590   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.180852   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:20.181655   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:20.676981   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.677001   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.677010   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.677013   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.680773   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.176665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.176687   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.176695   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.176698   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.180326   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.677105   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.677131   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.677143   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.677150   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.680523   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:22.176275   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.176296   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.176305   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.176311   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.180665   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:22.181892   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:22.677209   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.677234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.677254   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.677260   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.680867   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.177040   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.177059   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.177067   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.177072   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.180354   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.676494   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.676523   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.676533   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.676539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.679890   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.177143   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.177165   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.177172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.177178   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.181118   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.182010   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:24.677149   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.677167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.677176   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.677179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.681310   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.176839   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.176861   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.176869   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.176875   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.181361   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.676226   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.676235   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.676238   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.679734   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.176896   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.176927   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.176938   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.176942   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.180665   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.676529   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.676556   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.676567   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.676574   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.679852   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.680538   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:27.176980   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.177000   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.177008   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.177011   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.180641   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:27.676837   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.676865   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.676876   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.676883   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.680097   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.177112   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.177134   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.177145   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.177152   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.180461   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.676318   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.676339   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.676347   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.676351   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.680275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.680843   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:29.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.176576   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.176584   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.176589   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.180006   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:29.676572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.676591   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.676601   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.676608   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.679885   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.176623   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.176647   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.176655   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.176660   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.180360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.676414   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.676442   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.676454   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.676462   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.679795   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.176596   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.176622   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.176632   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.176638   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.180174   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.180775   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:31.676625   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.676645   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.676653   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.676657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.679755   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.176832   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.176853   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.176861   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.176866   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.180709   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.676943   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.676966   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.676975   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.676979   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.680453   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.176289   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.176309   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.176317   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.176323   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179239   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:33.179746   29617 node_ready.go:49] node "ha-610874-m03" has status "Ready":"True"
	I1011 21:19:33.179763   29617 node_ready.go:38] duration metric: took 20.003708199s for node "ha-610874-m03" to be "Ready" ...
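The node_ready loop above is the usual "poll until the Ready condition is True" pattern against the apiserver, issuing a GET on the node object roughly every 500ms. A compact client-go sketch of the same check; the pollNodeReady helper is illustrative, while the kubeconfig path, node name and 6m timeout are the ones in the log:

    // pollNodeReady waits until the named node reports a Ready=True condition.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func pollNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log above polls at about this interval
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19749-11611/kubeconfig")
    	cs, _ := kubernetes.NewForConfig(cfg)
    	fmt.Println(pollNodeReady(cs, "ha-610874-m03", 6*time.Minute))
    }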
	I1011 21:19:33.179771   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:33.179838   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:33.179846   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.179852   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179856   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.189958   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.199406   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.199502   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:19:33.199514   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.199523   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.199531   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.209887   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.210687   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.210702   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.210713   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.210717   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.217280   29617 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1011 21:19:33.217765   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.217784   29617 pod_ready.go:82] duration metric: took 18.353705ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217795   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217867   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:19:33.217877   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.217887   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.217892   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.223080   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:33.223812   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.223824   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.223831   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.223835   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.230872   29617 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1011 21:19:33.231311   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.231329   29617 pod_ready.go:82] duration metric: took 13.526998ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231340   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231407   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:19:33.231416   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.231425   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.231433   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.241511   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.242134   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.242152   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.242161   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.242167   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.246996   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.247556   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.247579   29617 pod_ready.go:82] duration metric: took 16.22432ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247588   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247649   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:19:33.247658   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.247665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.247671   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.251040   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.251793   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:33.251812   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.251824   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.251833   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.256535   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.256972   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.256988   29617 pod_ready.go:82] duration metric: took 9.394627ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.256997   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.377135   29617 request.go:632] Waited for 120.080186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377222   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.377244   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.377255   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.380444   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.576460   29617 request.go:632] Waited for 195.298391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576523   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576531   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.576540   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.576546   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.579942   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.580389   29617 pod_ready.go:93] pod "etcd-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.580410   29617 pod_ready.go:82] duration metric: took 323.407782ms for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
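The "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's default client-side rate limiter (roughly QPS 5 with a burst of 10), not from the apiserver itself. A small sketch of raising those limits on a rest.Config when a caller genuinely needs a higher request rate; the values chosen here are illustrative:

    // Raising client-go's client-side rate limits; illustrative values only.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19749-11611/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Defaults are roughly QPS=5, Burst=10, which is what produces the
    	// "Waited ... due to client-side throttling" messages in the log.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs, err := kubernetes.NewForConfig(cfg)
    	fmt.Println(cs != nil, err)
    }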
	I1011 21:19:33.580426   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.776719   29617 request.go:632] Waited for 196.227093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776796   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776801   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.776812   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.776819   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.780183   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.977331   29617 request.go:632] Waited for 196.373167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977390   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.977408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.977414   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.980667   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.981324   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.981341   29617 pod_ready.go:82] duration metric: took 400.908426ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.981356   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.176801   29617 request.go:632] Waited for 195.389419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176872   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176878   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.176886   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.176893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.180626   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.376945   29617 request.go:632] Waited for 195.362412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377024   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377032   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.377039   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.377045   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.380705   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.381593   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.381610   29617 pod_ready.go:82] duration metric: took 400.248016ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.381621   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.576685   29617 request.go:632] Waited for 195.00587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576774   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576785   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.576796   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.576812   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.580220   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.776845   29617 request.go:632] Waited for 195.742935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776934   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776946   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.776957   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.776965   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.781975   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:34.782910   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.782934   29617 pod_ready.go:82] duration metric: took 401.305343ms for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.782947   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.976878   29617 request.go:632] Waited for 193.849735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976930   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976935   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.976942   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.976951   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.980959   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.176307   29617 request.go:632] Waited for 194.592291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176377   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176382   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.176391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.176396   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.180046   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.180744   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.180763   29617 pod_ready.go:82] duration metric: took 397.808243ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.180772   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.376823   29617 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376892   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376904   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.376914   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.376920   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.380896   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.577025   29617 request.go:632] Waited for 195.339459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577098   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577106   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.577113   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.577121   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.580479   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.581020   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.581044   29617 pod_ready.go:82] duration metric: took 400.264515ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.581060   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.777008   29617 request.go:632] Waited for 195.878722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777069   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777082   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.777104   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.777112   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.780597   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.976851   29617 request.go:632] Waited for 195.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976920   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976925   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.976934   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.976956   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.980563   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.981007   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.981026   29617 pod_ready.go:82] duration metric: took 399.955573ms for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.981036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.177077   29617 request.go:632] Waited for 195.967969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177157   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177162   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.177169   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.177174   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.181463   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:36.376692   29617 request.go:632] Waited for 194.268817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376745   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376750   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.376757   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.376762   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.379384   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.379856   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.379878   29617 pod_ready.go:82] duration metric: took 398.835564ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.379892   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.577313   29617 request.go:632] Waited for 197.342873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577431   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577448   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.577456   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.577460   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.580412   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.776616   29617 request.go:632] Waited for 195.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776706   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776717   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.776728   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.776737   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.779960   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:36.780383   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.780400   29617 pod_ready.go:82] duration metric: took 400.499984ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.780412   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.976358   29617 request.go:632] Waited for 195.870601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976432   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976449   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.976465   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.976472   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.979995   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.177111   29617 request.go:632] Waited for 196.357808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177162   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.177174   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.177179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.180267   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.181011   29617 pod_ready.go:93] pod "kube-proxy-cwzw4" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.181027   29617 pod_ready.go:82] duration metric: took 400.605186ms for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.181036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.377210   29617 request.go:632] Waited for 196.081343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377264   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377271   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.377281   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.377290   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.380963   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.577326   29617 request.go:632] Waited for 195.76133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577389   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.577404   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.577408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.580712   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.581178   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.581195   29617 pod_ready.go:82] duration metric: took 400.154079ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.581207   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.776648   29617 request.go:632] Waited for 195.355762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776752   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776766   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.776778   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.776782   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.779689   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:37.976673   29617 request.go:632] Waited for 196.375961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976747   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976758   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.976880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.976898   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.980426   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.981073   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.981096   29617 pod_ready.go:82] duration metric: took 399.882141ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.981108   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.177223   29617 request.go:632] Waited for 196.014293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177283   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177288   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.177296   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.177301   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.181281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.376306   29617 request.go:632] Waited for 194.28038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376394   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376403   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.376412   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.376419   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.379547   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.380029   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:38.380048   29617 pod_ready.go:82] duration metric: took 398.929633ms for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.380058   29617 pod_ready.go:39] duration metric: took 5.200277623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
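The readiness loop above can be reproduced by hand against the cluster, assuming the "ha-610874" kubectl context that this run configures and the component labels listed in the summary line:

    kubectl --context ha-610874 -n kube-system get pods -o wide \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
    kubectl --context ha-610874 -n kube-system wait pod --all --for=condition=Ready --timeout=6m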
	I1011 21:19:38.380084   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:19:38.380134   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:19:38.400400   29617 api_server.go:72] duration metric: took 25.548169639s to wait for apiserver process to appear ...
	I1011 21:19:38.400421   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:19:38.400455   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:19:38.404896   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:19:38.404960   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:19:38.404973   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.404983   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.404989   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.405751   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:19:38.405814   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:19:38.405829   29617 api_server.go:131] duration metric: took 5.403218ms to wait for apiserver health ...
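The healthz and version probes logged here hit the same API server endpoint and can be issued directly, assuming the same context:

    kubectl --context ha-610874 get --raw /healthz
    kubectl --context ha-610874 version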
	I1011 21:19:38.405839   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:19:38.577234   29617 request.go:632] Waited for 171.320057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577302   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577307   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.577315   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.577319   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.583229   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.592399   29617 system_pods.go:59] 24 kube-system pods found
	I1011 21:19:38.592431   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.592436   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.592439   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.592442   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.592445   29617 system_pods.go:61] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.592448   29617 system_pods.go:61] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.592452   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.592455   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.592458   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.592461   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.592465   29617 system_pods.go:61] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.592468   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.592474   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.592477   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.592480   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.592484   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.592488   29617 system_pods.go:61] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.592493   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.592496   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.592499   29617 system_pods.go:61] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.592502   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.592511   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.592517   29617 system_pods.go:61] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.592521   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.592525   29617 system_pods.go:74] duration metric: took 186.682269ms to wait for pod list to return data ...
	I1011 21:19:38.592532   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:19:38.776788   29617 request.go:632] Waited for 184.17903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776850   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776857   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.776867   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.776874   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.780634   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.780764   29617 default_sa.go:45] found service account: "default"
	I1011 21:19:38.780782   29617 default_sa.go:55] duration metric: took 188.241369ms for default service account to be created ...
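The default service account check is a plain lookup in the default namespace; with the same assumed context it corresponds to:

    kubectl --context ha-610874 -n default get serviceaccount default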
	I1011 21:19:38.780791   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:19:38.977229   29617 request.go:632] Waited for 196.374035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977314   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977326   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.977333   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.977339   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.983305   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.990701   29617 system_pods.go:86] 24 kube-system pods found
	I1011 21:19:38.990734   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.990743   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.990750   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.990756   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.990762   29617 system_pods.go:89] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.990769   29617 system_pods.go:89] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.990775   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.990782   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.990790   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.990800   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.990808   29617 system_pods.go:89] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.990818   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.990826   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.990835   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.990842   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.990849   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.990856   29617 system_pods.go:89] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.990866   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.990873   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.990880   29617 system_pods.go:89] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.990889   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.990896   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.990903   29617 system_pods.go:89] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.990910   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.990922   29617 system_pods.go:126] duration metric: took 210.12433ms to wait for k8s-apps to be running ...
	I1011 21:19:38.990936   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:19:38.991000   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:19:39.006368   29617 system_svc.go:56] duration metric: took 15.405995ms WaitForService to wait for kubelet
	I1011 21:19:39.006398   29617 kubeadm.go:582] duration metric: took 26.154169399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
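The kubelet probe above runs systemctl inside the node over SSH; an equivalent manual check, assuming the ha-610874 profile name from this run, is:

    minikube -p ha-610874 ssh -- sudo systemctl is-active kubelet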
	I1011 21:19:39.006432   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:19:39.177139   29617 request.go:632] Waited for 170.58768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177204   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177210   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:39.177218   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:39.177226   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:39.180762   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:39.182158   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182186   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182210   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182214   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182219   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182222   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182225   29617 node_conditions.go:105] duration metric: took 175.788668ms to run NodePressure ...
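The NodePressure step reads the per-node capacity fields reported above (2 CPUs and 17734596Ki of ephemeral storage on each node); the same values can be inspected with, for example:

    kubectl --context ha-610874 describe nodes | grep -E 'Name:|cpu:|ephemeral-storage:'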
	I1011 21:19:39.182235   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:19:39.182261   29617 start.go:255] writing updated cluster config ...
	I1011 21:19:39.182594   29617 ssh_runner.go:195] Run: rm -f paused
	I1011 21:19:39.238354   29617 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:19:39.241534   29617 out.go:177] * Done! kubectl is now configured to use "ha-610874" cluster and "default" namespace by default
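The CRI-O excerpt that follows is the container runtime log collected from the node; it can also be pulled directly, assuming crio runs as a systemd unit inside the ha-610874 VM:

    minikube -p ha-610874 ssh -- sudo journalctl -u crio --no-pager | tail -n 50
    minikube -p ha-610874 logs --file=ha-610874-logs.txt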
	
	
	==> CRI-O <==
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.675735421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17e60db9-f787-4f95-8b3f-18bf0f6aa920 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.676170551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17e60db9-f787-4f95-8b3f-18bf0f6aa920 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.713730056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=987f4751-075e-4005-8241-8dd3f1501f36 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.713813399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=987f4751-075e-4005-8241-8dd3f1501f36 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.714935028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=285c8d18-0df3-4982-9bfb-604828bacde5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.715439041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681805715417217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=285c8d18-0df3-4982-9bfb-604828bacde5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.716269415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=633fd312-226e-4151-bf47-19912a40f3a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.716345159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=633fd312-226e-4151-bf47-19912a40f3a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.716572076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=633fd312-226e-4151-bf47-19912a40f3a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.759136512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12418cd1-481b-4e7b-9755-e662c9d733f6 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.759278178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12418cd1-481b-4e7b-9755-e662c9d733f6 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.760499319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e9c21f9-4e03-4da5-ac89-bcb418676202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.760948857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681805760927888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e9c21f9-4e03-4da5-ac89-bcb418676202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.761695743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30da07ab-d659-4323-9e56-db90d8a55266 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.761769801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30da07ab-d659-4323-9e56-db90d8a55266 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.762005975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30da07ab-d659-4323-9e56-db90d8a55266 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.800056799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec0ff38d-f498-4ff9-949c-401c94872eb3 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.800140803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec0ff38d-f498-4ff9-949c-401c94872eb3 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.801468581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0aa2ef9b-36ef-4667-82f0-489d6a8efc54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.801909390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681805801889372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0aa2ef9b-36ef-4667-82f0-489d6a8efc54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.802720071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d34628ea-491b-4464-8c4f-7bba7f93f6b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.802794595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d34628ea-491b-4464-8c4f-7bba7f93f6b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.803058941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d34628ea-491b-4464-8c4f-7bba7f93f6b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.832812186Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=4233e1f9-1eb7-4c3b-b920-86bf0b3a809c name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:25 ha-610874 crio[662]: time="2024-10-11 21:23:25.832882226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4233e1f9-1eb7-4c3b-b920-86bf0b3a809c name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a12e9c8cc5fc5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3d6c8146ac279       busybox-7dff88458-wdkxg
	add7da026dcc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8079f4949344c       coredns-7c65d6cfc9-xdhdb
	f6f7910716598       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bb1b1e2f66116       coredns-7c65d6cfc9-bhkxl
	01564ba5bc1e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5b0253d201393       storage-provisioner
	9d5b2015aad60       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   bc055170688e1       kindnet-pd7rn
	4af1bc183cfbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   9bb0d73fd8a6d       kube-proxy-4tqhn
	7009deb3ff5ef       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   343b700a511ad       kube-vip-ha-610874
	1bb0907534c8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9a96e5f0cd28a       kube-controller-manager-ha-610874
	093fe14b91d96       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   089d2c0589273       kube-scheduler-ha-610874
	b6a994e3f4bd9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6fbc98773bd42       kube-apiserver-ha-610874
	1cf13112be94f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   65e184a932364       etcd-ha-610874
	
	
	==> coredns [add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6] <==
	[INFO] 10.244.1.2:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143766s
	[INFO] 10.244.1.2:38119 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142587s
	[INFO] 10.244.1.2:40246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.002793445s
	[INFO] 10.244.1.2:46273 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000207574s
	[INFO] 10.244.0.4:51515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133463s
	[INFO] 10.244.0.4:34555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773084s
	[INFO] 10.244.0.4:56190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010851s
	[INFO] 10.244.0.4:35324 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114943s
	[INFO] 10.244.0.4:37261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075619s
	[INFO] 10.244.2.2:33936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100657s
	[INFO] 10.244.2.2:47182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246779s
	[INFO] 10.244.1.2:44485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167961s
	[INFO] 10.244.1.2:46483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141019s
	[INFO] 10.244.1.2:55464 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121351s
	[INFO] 10.244.0.4:47194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117616s
	[INFO] 10.244.0.4:49523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148468s
	[INFO] 10.244.0.4:45932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127987s
	[INFO] 10.244.0.4:49317 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075167s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169352s
	[INFO] 10.244.2.2:33809 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014751s
	[INFO] 10.244.2.2:44485 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176967s
	[INFO] 10.244.1.2:48359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011299s
	[INFO] 10.244.0.4:56947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140437s
	[INFO] 10.244.0.4:57754 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075899s
	[INFO] 10.244.0.4:59528 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091718s
	
	
	==> coredns [f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb] <==
	[INFO] 127.0.0.1:48153 - 48750 "HINFO IN 7219889624523006915.8528053042981959638. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015325438s
	[INFO] 10.244.2.2:47536 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.017241259s
	[INFO] 10.244.2.2:38591 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013641236s
	[INFO] 10.244.1.2:49949 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322762s
	[INFO] 10.244.1.2:43849 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00009337s
	[INFO] 10.244.0.4:40246 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000070768s
	[INFO] 10.244.0.4:45808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00140468s
	[INFO] 10.244.2.2:36598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219913s
	[INFO] 10.244.2.2:59970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164371s
	[INFO] 10.244.2.2:54785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130909s
	[INFO] 10.244.1.2:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791262s
	[INFO] 10.244.1.2:49139 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158826s
	[INFO] 10.244.1.2:59870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00130207s
	[INFO] 10.244.1.2:48112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127361s
	[INFO] 10.244.0.4:37981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152222s
	[INFO] 10.244.0.4:40975 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001145115s
	[INFO] 10.244.0.4:46746 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060695s
	[INFO] 10.244.2.2:60221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111051s
	[INFO] 10.244.2.2:45949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000966s
	[INFO] 10.244.1.2:51845 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131185s
	[INFO] 10.244.2.2:49925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140614s
	[INFO] 10.244.1.2:40749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139491s
	[INFO] 10.244.1.2:40058 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192557s
	[INFO] 10.244.1.2:36253 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154213s
	[INFO] 10.244.0.4:54354 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127201s
	
	
	==> describe nodes <==
	Name:               ha-610874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:16:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    ha-610874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cfe54b8903d4e3899113202463cdd3d
	  System UUID:                0cfe54b8-903d-4e38-9911-3202463cdd3d
	  Boot ID:                    afa53331-2d72-4daf-aead-d3b59f60fb23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdkxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-bhkxl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-xdhdb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-610874                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-pd7rn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-610874             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-610874    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-4tqhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-610874             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-610874                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-610874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-610874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-610874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  NodeReady                6m3s   kubelet          Node ha-610874 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  RegisteredNode           4m8s   node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	
	
	Name:               ha-610874-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:17:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:20:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-610874-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5e48fde498443da85ce03c51747b961
	  System UUID:                e5e48fde-4984-43da-85ce-03c51747b961
	  Boot ID:                    bf2f6504-4406-4797-b6e1-dc754be8ce6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pwg8s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-610874-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-xs5m6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-610874-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-610874-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-4bj7p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-610874-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-610874-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-610874-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-610874-m02 status is now: NodeNotReady
	
	
	Name:               ha-610874-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-610874-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1063a3d54d5d40c88a61db94380d3423
	  System UUID:                1063a3d5-4d5d-40c8-8a61-db94380d3423
	  Boot ID:                    ced9dc07-ccd1-4190-aae0-50f9a8bdae06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4sstr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-610874-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-2c774                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-610874-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-ha-610874-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-cwzw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-610874-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-610874-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m17s                  cidrAllocator    Node ha-610874-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-610874-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	
	
	Name:               ha-610874-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_20_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-610874-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75d61525a70843b49a5efd4786a05869
	  System UUID:                75d61525-a708-43b4-9a5e-fd4786a05869
	  Boot ID:                    172ace10-e670-4373-a755-bb93871c28da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7dn76       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-vrd24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-610874-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m7s                 cidrAllocator    Node ha-610874-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-610874-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.855992] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543327] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581790] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.580104] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056339] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.193419] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.137869] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293941] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.956728] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.562630] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.064485] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.508464] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.090437] kauditd_printk_skb: 79 callbacks suppressed
	[Oct11 21:17] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.436722] kauditd_printk_skb: 29 callbacks suppressed
	[ +46.213407] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a] <==
	{"level":"warn","ts":"2024-10-11T21:23:26.057817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.058883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.063378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.075726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.084306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.093023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.100677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.105118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.109727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.118490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.124478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.127440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.134370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.137418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.140419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.145316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.151038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.156995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.160145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.163004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.166485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.172430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.178726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.180696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:26.241481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:23:26 up 7 min,  0 users,  load average: 0.36, 0.39, 0.20
	Linux ha-610874 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952] <==
	I1011 21:22:53.015599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.016986       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:03.017143       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:03.017517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:03.017599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.017887       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:03.017926       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:03.018170       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:03.018292       1 main.go:300] handling current node
	I1011 21:23:13.008357       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:13.008403       1 main.go:300] handling current node
	I1011 21:23:13.008468       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:13.008474       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:13.008844       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:13.008922       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:13.009419       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:13.009448       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:23.017976       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:23.018143       1 main.go:300] handling current node
	I1011 21:23:23.018234       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:23.018259       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:23.018517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:23.018551       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:23.018673       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:23.018695       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948] <==
	I1011 21:17:03.544827       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1011 21:17:03.633951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1011 21:17:53.070315       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.070829       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 84.644µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:17:53.072106       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.073324       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.074623       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.578549ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:10.074019       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.449µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:19:10.074013       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9bd8f8e8-8e91-4067-a12f-1ea2d8bd41c6"
	E1011 21:19:10.074068       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.809µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:45.881753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47690: use of closed network connection
	E1011 21:19:46.062184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47710: use of closed network connection
	E1011 21:19:46.253652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E1011 21:19:46.438494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47750: use of closed network connection
	E1011 21:19:46.637537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47770: use of closed network connection
	E1011 21:19:46.815140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45184: use of closed network connection
	E1011 21:19:47.002661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45216: use of closed network connection
	E1011 21:19:47.179398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45236: use of closed network connection
	E1011 21:19:47.346528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45250: use of closed network connection
	E1011 21:19:47.638405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45264: use of closed network connection
	E1011 21:19:47.808669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45288: use of closed network connection
	E1011 21:19:47.977304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45304: use of closed network connection
	E1011 21:19:48.152762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45326: use of closed network connection
	E1011 21:19:48.324710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45346: use of closed network connection
	E1011 21:19:48.491718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45354: use of closed network connection
	
	
	==> kube-controller-manager [1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865] <==
	I1011 21:20:18.968008       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-610874-m04" podCIDRs=["10.244.3.0/24"]
	I1011 21:20:18.968119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.968257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.984966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:19.260924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.121280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.397093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.070457       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-610874-m04"
	I1011 21:20:23.072402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.132945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.420908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.568334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:29.120840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562762       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:20:39.580852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:40.377354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:49.215156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:21:38.097956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:21:38.098503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.132013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.234358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.800775ms"
	I1011 21:21:38.234458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.4µs"
	I1011 21:21:38.464262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:43.340055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	
	
	==> kube-proxy [4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:17:05.854510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:17:05.879022       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E1011 21:17:05.879501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:17:05.914134       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:17:05.914253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:17:05.914286       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:17:05.916891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:17:05.917757       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:17:05.917796       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:17:05.920479       1 config.go:199] "Starting service config controller"
	I1011 21:17:05.920740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:17:05.920939       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:17:05.920964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:17:05.921847       1 config.go:328] "Starting node config controller"
	I1011 21:17:05.921877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:17:06.021605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:17:06.021672       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:17:06.021955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94] <==
	W1011 21:16:56.914961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:56.914997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:56.955611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 21:16:56.955698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.100673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:16:57.100737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.117148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.117326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.263820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 21:16:57.264353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.296892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 21:16:57.297090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.359800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.360057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.555273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 21:16:57.555402       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 21:17:00.497419       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1011 21:20:19.054608       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7dn76" node="ha-610874-m04"
	E1011 21:20:19.055446       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-7dn76"
	E1011 21:20:19.188470       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dz2h8" node="ha-610874-m04"
	E1011 21:20:19.188552       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-dz2h8"
	E1011 21:20:19.193309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	E1011 21:20:19.195518       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f3a80da1-771c-458b-85ce-bff2b7759d1e(kube-system/kube-proxy-ht4ns) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ht4ns"
	E1011 21:20:19.195828       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" pod="kube-system/kube-proxy-ht4ns"
	I1011 21:20:19.196036       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	
	
	==> kubelet <==
	Oct 11 21:21:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:21:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036447    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036488    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038549    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038630    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040811    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040841    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.042974    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.043019    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044063    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044089    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045695    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045734    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:58 ha-610874 kubelet[1312]: E1011 21:22:58.943175    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.046933    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.047037    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049554    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049631    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.053671    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.054088    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr: (4.036011179s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (1.304728458s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m03_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-610874 node start m02 -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:16:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:16:16.315983   29617 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:16:16.316246   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316256   29617 out.go:358] Setting ErrFile to fd 2...
	I1011 21:16:16.316260   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316440   29617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:16:16.316986   29617 out.go:352] Setting JSON to false
	I1011 21:16:16.317794   29617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3521,"bootTime":1728677855,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:16:16.317891   29617 start.go:139] virtualization: kvm guest
	I1011 21:16:16.320541   29617 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:16:16.321962   29617 notify.go:220] Checking for updates...
	I1011 21:16:16.321994   29617 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:16:16.323197   29617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:16:16.324431   29617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:16:16.325803   29617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.326998   29617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:16:16.328308   29617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:16:16.329813   29617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:16:16.364781   29617 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 21:16:16.366005   29617 start.go:297] selected driver: kvm2
	I1011 21:16:16.366018   29617 start.go:901] validating driver "kvm2" against <nil>
	I1011 21:16:16.366031   29617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:16:16.366752   29617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.366844   29617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:16:16.382125   29617 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:16:16.382207   29617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 21:16:16.382499   29617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:16:16.382537   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:16.382594   29617 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 21:16:16.382605   29617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 21:16:16.382687   29617 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:16.382807   29617 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.384631   29617 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:16:16.385929   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:16.385976   29617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:16:16.385989   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:16:16.386070   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:16:16.386083   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:16:16.386381   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:16.386407   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json: {Name:mk126d2587705783f49cefd5532c6478d010ac07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:16.386555   29617 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:16:16.386593   29617 start.go:364] duration metric: took 23.105µs to acquireMachinesLock for "ha-610874"
	I1011 21:16:16.386631   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:16:16.386695   29617 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 21:16:16.388125   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:16:16.388266   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:16:16.388308   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:16:16.402198   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1011 21:16:16.402701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:16:16.403193   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:16:16.403238   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:16:16.403629   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:16:16.403831   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:16.403987   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:16.404130   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:16:16.404153   29617 client.go:168] LocalClient.Create starting
	I1011 21:16:16.404179   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:16:16.404207   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404220   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404273   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:16:16.404296   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404309   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404323   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:16:16.404331   29617 main.go:141] libmachine: (ha-610874) Calling .PreCreateCheck
	I1011 21:16:16.404634   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:16.404967   29617 main.go:141] libmachine: Creating machine...
	I1011 21:16:16.404978   29617 main.go:141] libmachine: (ha-610874) Calling .Create
	I1011 21:16:16.405091   29617 main.go:141] libmachine: (ha-610874) Creating KVM machine...
	I1011 21:16:16.406548   29617 main.go:141] libmachine: (ha-610874) DBG | found existing default KVM network
	I1011 21:16:16.407330   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.407180   29640 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1011 21:16:16.407350   29617 main.go:141] libmachine: (ha-610874) DBG | created network xml: 
	I1011 21:16:16.407362   29617 main.go:141] libmachine: (ha-610874) DBG | <network>
	I1011 21:16:16.407369   29617 main.go:141] libmachine: (ha-610874) DBG |   <name>mk-ha-610874</name>
	I1011 21:16:16.407378   29617 main.go:141] libmachine: (ha-610874) DBG |   <dns enable='no'/>
	I1011 21:16:16.407386   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407396   29617 main.go:141] libmachine: (ha-610874) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 21:16:16.407401   29617 main.go:141] libmachine: (ha-610874) DBG |     <dhcp>
	I1011 21:16:16.407430   29617 main.go:141] libmachine: (ha-610874) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 21:16:16.407460   29617 main.go:141] libmachine: (ha-610874) DBG |     </dhcp>
	I1011 21:16:16.407476   29617 main.go:141] libmachine: (ha-610874) DBG |   </ip>
	I1011 21:16:16.407485   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407492   29617 main.go:141] libmachine: (ha-610874) DBG | </network>
	I1011 21:16:16.407498   29617 main.go:141] libmachine: (ha-610874) DBG | 
	I1011 21:16:16.412623   29617 main.go:141] libmachine: (ha-610874) DBG | trying to create private KVM network mk-ha-610874 192.168.39.0/24...
	I1011 21:16:16.475097   29617 main.go:141] libmachine: (ha-610874) DBG | private KVM network mk-ha-610874 192.168.39.0/24 created
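The lines above show the kvm2 driver picking a free private /24 (network.go settles on 192.168.39.0/24) and then defining the mk-ha-610874 libvirt network from the generated XML. A minimal, stdlib-only Go sketch of that kind of probe follows; the firstFreePrivateSubnet helper and the candidate list are illustrative, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreePrivateSubnet returns the first candidate /24 that does not
	// overlap any address already assigned on a host interface.
	func firstFreePrivateSubnet(candidates []string) (*net.IPNet, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				return nil, err
			}
			inUse := false
			for _, ifc := range ifaces {
				addrs, _ := ifc.Addrs()
				for _, a := range addrs {
					if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
						inUse = true
					}
				}
			}
			if !inUse {
				return subnet, nil
			}
		}
		return nil, fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		// Hypothetical candidate list; the driver walks its own set of private ranges.
		subnet, err := firstFreePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using free private subnet", subnet)
	}
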
	I1011 21:16:16.475123   29617 main.go:141] libmachine: (ha-610874) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.475147   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.475097   29640 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.475159   29617 main.go:141] libmachine: (ha-610874) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:16:16.475241   29617 main.go:141] libmachine: (ha-610874) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:16:16.729125   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.729005   29640 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa...
	I1011 21:16:16.910019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.909910   29640 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk...
	I1011 21:16:16.910047   29617 main.go:141] libmachine: (ha-610874) DBG | Writing magic tar header
	I1011 21:16:16.910056   29617 main.go:141] libmachine: (ha-610874) DBG | Writing SSH key tar header
	I1011 21:16:16.910063   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.910020   29640 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.910136   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874
	I1011 21:16:16.910176   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 (perms=drwx------)
	I1011 21:16:16.910191   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:16:16.910200   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:16:16.910207   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.910225   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:16:16.910242   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:16:16.910260   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:16:16.910277   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:16:16.910286   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:16:16.910293   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:16:16.910306   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:16.910328   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:16:16.910345   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home
	I1011 21:16:16.910356   29617 main.go:141] libmachine: (ha-610874) DBG | Skipping /home - not owner
	I1011 21:16:16.911372   29617 main.go:141] libmachine: (ha-610874) define libvirt domain using xml: 
	I1011 21:16:16.911391   29617 main.go:141] libmachine: (ha-610874) <domain type='kvm'>
	I1011 21:16:16.911398   29617 main.go:141] libmachine: (ha-610874)   <name>ha-610874</name>
	I1011 21:16:16.911402   29617 main.go:141] libmachine: (ha-610874)   <memory unit='MiB'>2200</memory>
	I1011 21:16:16.911407   29617 main.go:141] libmachine: (ha-610874)   <vcpu>2</vcpu>
	I1011 21:16:16.911412   29617 main.go:141] libmachine: (ha-610874)   <features>
	I1011 21:16:16.911418   29617 main.go:141] libmachine: (ha-610874)     <acpi/>
	I1011 21:16:16.911425   29617 main.go:141] libmachine: (ha-610874)     <apic/>
	I1011 21:16:16.911430   29617 main.go:141] libmachine: (ha-610874)     <pae/>
	I1011 21:16:16.911442   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911451   29617 main.go:141] libmachine: (ha-610874)   </features>
	I1011 21:16:16.911459   29617 main.go:141] libmachine: (ha-610874)   <cpu mode='host-passthrough'>
	I1011 21:16:16.911467   29617 main.go:141] libmachine: (ha-610874)   
	I1011 21:16:16.911473   29617 main.go:141] libmachine: (ha-610874)   </cpu>
	I1011 21:16:16.911479   29617 main.go:141] libmachine: (ha-610874)   <os>
	I1011 21:16:16.911484   29617 main.go:141] libmachine: (ha-610874)     <type>hvm</type>
	I1011 21:16:16.911489   29617 main.go:141] libmachine: (ha-610874)     <boot dev='cdrom'/>
	I1011 21:16:16.911492   29617 main.go:141] libmachine: (ha-610874)     <boot dev='hd'/>
	I1011 21:16:16.911498   29617 main.go:141] libmachine: (ha-610874)     <bootmenu enable='no'/>
	I1011 21:16:16.911504   29617 main.go:141] libmachine: (ha-610874)   </os>
	I1011 21:16:16.911510   29617 main.go:141] libmachine: (ha-610874)   <devices>
	I1011 21:16:16.911516   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='cdrom'>
	I1011 21:16:16.911532   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/boot2docker.iso'/>
	I1011 21:16:16.911547   29617 main.go:141] libmachine: (ha-610874)       <target dev='hdc' bus='scsi'/>
	I1011 21:16:16.911568   29617 main.go:141] libmachine: (ha-610874)       <readonly/>
	I1011 21:16:16.911586   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911596   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='disk'>
	I1011 21:16:16.911605   29617 main.go:141] libmachine: (ha-610874)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:16:16.911637   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk'/>
	I1011 21:16:16.911655   29617 main.go:141] libmachine: (ha-610874)       <target dev='hda' bus='virtio'/>
	I1011 21:16:16.911674   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911692   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911700   29617 main.go:141] libmachine: (ha-610874)       <source network='mk-ha-610874'/>
	I1011 21:16:16.911705   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911709   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911713   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911719   29617 main.go:141] libmachine: (ha-610874)       <source network='default'/>
	I1011 21:16:16.911726   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911730   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911736   29617 main.go:141] libmachine: (ha-610874)     <serial type='pty'>
	I1011 21:16:16.911741   29617 main.go:141] libmachine: (ha-610874)       <target port='0'/>
	I1011 21:16:16.911745   29617 main.go:141] libmachine: (ha-610874)     </serial>
	I1011 21:16:16.911751   29617 main.go:141] libmachine: (ha-610874)     <console type='pty'>
	I1011 21:16:16.911757   29617 main.go:141] libmachine: (ha-610874)       <target type='serial' port='0'/>
	I1011 21:16:16.911762   29617 main.go:141] libmachine: (ha-610874)     </console>
	I1011 21:16:16.911771   29617 main.go:141] libmachine: (ha-610874)     <rng model='virtio'>
	I1011 21:16:16.911795   29617 main.go:141] libmachine: (ha-610874)       <backend model='random'>/dev/random</backend>
	I1011 21:16:16.911810   29617 main.go:141] libmachine: (ha-610874)     </rng>
	I1011 21:16:16.911818   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911827   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911835   29617 main.go:141] libmachine: (ha-610874)   </devices>
	I1011 21:16:16.911844   29617 main.go:141] libmachine: (ha-610874) </domain>
	I1011 21:16:16.911853   29617 main.go:141] libmachine: (ha-610874) 
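The <domain> XML printed above is what gets handed to libvirt to define and then boot the VM ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch of that step, assuming the libvirt.org/go/libvirt Go bindings and a local copy of the XML; this is not necessarily the exact API surface the kvm2 driver wraps.

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Assumed file holding the <domain> document shown in the log above.
		domainXML, err := os.ReadFile("ha-610874.xml")
		if err != nil {
			panic(err)
		}
		// Same URI as KVMQemuURI in the cluster config.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Define the persistent domain from XML, then start it.
		dom, err := conn.DomainDefineXML(string(domainXML))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			panic(err)
		}
		name, _ := dom.GetName()
		fmt.Println("defined and started domain", name)
	}
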
	I1011 21:16:16.916111   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:3e:bc:a1 in network default
	I1011 21:16:16.916699   29617 main.go:141] libmachine: (ha-610874) Ensuring networks are active...
	I1011 21:16:16.916720   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:16.917266   29617 main.go:141] libmachine: (ha-610874) Ensuring network default is active
	I1011 21:16:16.917528   29617 main.go:141] libmachine: (ha-610874) Ensuring network mk-ha-610874 is active
	I1011 21:16:16.918196   29617 main.go:141] libmachine: (ha-610874) Getting domain xml...
	I1011 21:16:16.918917   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:18.090043   29617 main.go:141] libmachine: (ha-610874) Waiting to get IP...
	I1011 21:16:18.090745   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.091141   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.091169   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.091121   29640 retry.go:31] will retry after 201.066044ms: waiting for machine to come up
	I1011 21:16:18.293473   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.293939   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.293961   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.293905   29640 retry.go:31] will retry after 378.868503ms: waiting for machine to come up
	I1011 21:16:18.674665   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.675080   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.675111   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.675034   29640 retry.go:31] will retry after 485.059913ms: waiting for machine to come up
	I1011 21:16:19.161402   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.161817   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.161841   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.161779   29640 retry.go:31] will retry after 597.34397ms: waiting for machine to come up
	I1011 21:16:19.760468   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.761020   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.761049   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.760968   29640 retry.go:31] will retry after 563.860814ms: waiting for machine to come up
	I1011 21:16:20.326631   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:20.326999   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:20.327019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:20.326975   29640 retry.go:31] will retry after 723.522472ms: waiting for machine to come up
	I1011 21:16:21.051775   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:21.052216   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:21.052252   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:21.052167   29640 retry.go:31] will retry after 1.08960891s: waiting for machine to come up
	I1011 21:16:22.142962   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:22.143401   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:22.143426   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:22.143368   29640 retry.go:31] will retry after 897.228253ms: waiting for machine to come up
	I1011 21:16:23.042418   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:23.042804   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:23.042830   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:23.042766   29640 retry.go:31] will retry after 1.598924345s: waiting for machine to come up
	I1011 21:16:24.643409   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:24.643801   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:24.643824   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:24.643752   29640 retry.go:31] will retry after 2.213754576s: waiting for machine to come up
	I1011 21:16:26.858883   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:26.859262   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:26.859288   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:26.859206   29640 retry.go:31] will retry after 2.657896821s: waiting for machine to come up
	I1011 21:16:29.518223   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:29.518660   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:29.518685   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:29.518604   29640 retry.go:31] will retry after 3.090933093s: waiting for machine to come up
	I1011 21:16:32.611083   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:32.611504   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:32.611526   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:32.611439   29640 retry.go:31] will retry after 4.256728144s: waiting for machine to come up
	I1011 21:16:36.869470   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.869869   29617 main.go:141] libmachine: (ha-610874) Found IP for machine: 192.168.39.10
	I1011 21:16:36.869889   29617 main.go:141] libmachine: (ha-610874) Reserving static IP address...
	I1011 21:16:36.869901   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has current primary IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.870189   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "ha-610874", mac: "52:54:00:5f:c7:da", ip: "192.168.39.10"} in network mk-ha-610874
	I1011 21:16:36.939387   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:36.939416   29617 main.go:141] libmachine: (ha-610874) Reserved static IP address: 192.168.39.10
	I1011 21:16:36.939452   29617 main.go:141] libmachine: (ha-610874) Waiting for SSH to be available...
	I1011 21:16:36.941715   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.941968   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874
	I1011 21:16:36.941981   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:5f:c7:da
	I1011 21:16:36.942096   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:36.942140   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:36.942184   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:36.942200   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:36.942220   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:36.945904   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:16:36.945918   29617 main.go:141] libmachine: (ha-610874) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:16:36.945924   29617 main.go:141] libmachine: (ha-610874) DBG | command : exit 0
	I1011 21:16:36.945937   29617 main.go:141] libmachine: (ha-610874) DBG | err     : exit status 255
	I1011 21:16:36.945943   29617 main.go:141] libmachine: (ha-610874) DBG | output  : 
	I1011 21:16:39.948099   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:39.950401   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950756   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:39.950783   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950892   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:39.950914   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:39.950953   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:39.950970   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:39.950994   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:40.078944   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: <nil>: 
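The two attempts above (exit status 255 while the guest is still booting, then a clean "exit 0" once sshd answers) are the WaitForSSH loop. A rough stand-alone equivalent in Go, shelling out to the same external ssh client with the flags shown in the log; the host address, key path, retry count and sleep interval here are placeholders, not minikube's exact values.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH runs "exit 0" over ssh until the guest accepts the connection.
	func waitForSSH(host, keyPath string, attempts int) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", keyPath, "-p", "22", "docker@" + host, "exit 0",
		}
		for i := 0; i < attempts; i++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil // sshd is up and ran the command
			}
			time.Sleep(3 * time.Second) // the log retries on roughly this cadence
		}
		return fmt.Errorf("ssh to %s not available after %d attempts", host, attempts)
	}

	func main() {
		if err := waitForSSH("192.168.39.10", "/path/to/id_rsa", 10); err != nil {
			fmt.Println(err)
		}
	}
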
	I1011 21:16:40.079215   29617 main.go:141] libmachine: (ha-610874) KVM machine creation complete!
	I1011 21:16:40.079553   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:40.080090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080284   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080465   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:16:40.080487   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:16:40.081981   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:16:40.081998   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:16:40.082006   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:16:40.082015   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.084298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084628   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.084651   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084818   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.084959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085094   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085224   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.085388   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.085639   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.085653   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:16:40.198146   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.198167   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:16:40.198175   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.200910   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201288   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.201309   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201507   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.201664   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.201836   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.202076   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.202254   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.202419   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.202429   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:16:40.320067   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:16:40.320126   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:16:40.320134   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:16:40.320143   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320383   29617 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:16:40.320406   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320566   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.322841   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323123   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.323151   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323298   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.323462   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323604   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323710   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.323847   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.324007   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.324018   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:16:40.453038   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:16:40.453062   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.455945   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456318   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.456341   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456518   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.456721   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456849   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.457152   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.457380   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.457403   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:16:40.579667   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.579694   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:16:40.579712   29617 buildroot.go:174] setting up certificates
	I1011 21:16:40.579722   29617 provision.go:84] configureAuth start
	I1011 21:16:40.579730   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.579972   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:40.582609   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.582944   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.582970   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.583046   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.585314   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585630   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.585652   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585815   29617 provision.go:143] copyHostCerts
	I1011 21:16:40.585854   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585886   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:16:40.585905   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585976   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:16:40.586075   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586099   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:16:40.586109   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586148   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:16:40.586259   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586280   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:16:40.586286   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586312   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:16:40.586375   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
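The "generating server cert" step above issues a server certificate signed by the minikube CA, embedding the SANs listed in the log (127.0.0.1, 192.168.39.10, ha-610874, localhost, minikube). A self-contained stdlib sketch of the same idea follows; it uses throwaway keys rather than the files under .minikube/certs, and the field choices are illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Errors elided for brevity; a real implementation must check them.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for ca-key.pem
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-610874", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("issued server cert (%d bytes DER) with SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
	}
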
	I1011 21:16:40.739496   29617 provision.go:177] copyRemoteCerts
	I1011 21:16:40.739549   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:16:40.739572   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.742211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742512   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.742540   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742690   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.742858   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.743050   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.743333   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:40.830053   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:16:40.830129   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:16:40.854808   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:16:40.854871   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:16:40.878779   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:16:40.878844   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1011 21:16:40.903681   29617 provision.go:87] duration metric: took 323.94786ms to configureAuth
	I1011 21:16:40.903706   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:16:40.903876   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:16:40.903945   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.906420   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906781   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.906802   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906980   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.907177   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907312   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907417   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.907537   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.907709   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.907729   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:16:41.149826   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:16:41.149854   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:16:41.149864   29617 main.go:141] libmachine: (ha-610874) Calling .GetURL
	I1011 21:16:41.151110   29617 main.go:141] libmachine: (ha-610874) DBG | Using libvirt version 6000000
	I1011 21:16:41.153298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153626   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.153645   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153813   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:16:41.153832   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:16:41.153840   29617 client.go:171] duration metric: took 24.749677896s to LocalClient.Create
	I1011 21:16:41.153864   29617 start.go:167] duration metric: took 24.749734503s to libmachine.API.Create "ha-610874"
	I1011 21:16:41.153877   29617 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:16:41.153888   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:16:41.153907   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.154134   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:16:41.154156   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.156353   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156731   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.156764   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.157060   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.157197   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.157377   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.245691   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:16:41.249882   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:16:41.249905   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:16:41.249959   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:16:41.250032   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:16:41.250041   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:16:41.250126   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:16:41.259595   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:41.283193   29617 start.go:296] duration metric: took 129.282074ms for postStartSetup
	I1011 21:16:41.283237   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:41.283845   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.286641   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.286965   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.286993   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.287545   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:41.287766   29617 start.go:128] duration metric: took 24.901059572s to createHost
	I1011 21:16:41.287798   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.290002   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290466   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.290494   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290571   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.290756   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.290937   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.291088   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.291234   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:41.291438   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:41.291450   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:16:41.403429   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681401.368525171
	
	I1011 21:16:41.403454   29617 fix.go:216] guest clock: 1728681401.368525171
	I1011 21:16:41.403464   29617 fix.go:229] Guest: 2024-10-11 21:16:41.368525171 +0000 UTC Remote: 2024-10-11 21:16:41.287784391 +0000 UTC m=+25.009627787 (delta=80.74078ms)
	I1011 21:16:41.403482   29617 fix.go:200] guest clock delta is within tolerance: 80.74078ms
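The fix.go lines above measure the skew between the guest VM clock and the host and only resync when the delta exceeds a tolerance. A minimal Go sketch of that comparison, assuming an illustrative 2-second tolerance rather than minikube's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaWithinTolerance reports whether the guest clock is close enough to
    // the host clock that no resync is needed. The 2s tolerance is an assumed value
    // for this sketch, not minikube's real threshold.
    func clockDeltaWithinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(80 * time.Millisecond) // roughly the ~80ms delta seen in the log
        delta, ok := clockDeltaWithinTolerance(host, guest, 2*time.Second)
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }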
	I1011 21:16:41.403487   29617 start.go:83] releasing machines lock for "ha-610874", held for 25.016883267s
	I1011 21:16:41.403504   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.403754   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.406243   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406536   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.406580   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406719   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407201   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407373   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407483   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:16:41.407533   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.407566   29617 ssh_runner.go:195] Run: cat /version.json
	I1011 21:16:41.407594   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.409924   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410186   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410232   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410307   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.410474   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.410626   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.410667   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410689   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410822   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.410885   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.411000   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.411159   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.411313   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.492040   29617 ssh_runner.go:195] Run: systemctl --version
	I1011 21:16:41.526227   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:16:41.684068   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:16:41.690188   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:16:41.690243   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:16:41.709475   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:16:41.709500   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:16:41.709563   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:16:41.725364   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:16:41.739326   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:16:41.739404   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:16:41.753640   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:16:41.767723   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:16:41.878060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:16:42.036051   29617 docker.go:233] disabling docker service ...
	I1011 21:16:42.036136   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:16:42.051987   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:16:42.065946   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:16:42.197199   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:16:42.333061   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:16:42.346878   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:16:42.365538   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:16:42.365592   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.375884   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:16:42.375943   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.386250   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.396765   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.407109   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:16:42.417549   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.427975   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.446147   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
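The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching the cgroup manager to cgroupfs, and opening unprivileged ports via default_sysctls. A rough Go equivalent of two of those edits, shown only to make the line-oriented rewrite explicit (the config content below is a stand-in, not the real file):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.runtime]
    cgroup_manager = "systemd"
    pause_image = "registry.k8s.io/pause:3.9"
    `
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

        fmt.Print(conf)
    }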
	I1011 21:16:42.456868   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:16:42.466165   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:16:42.466232   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:16:42.479799   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:16:42.489557   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:42.623905   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:16:42.716796   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:16:42.716871   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:16:42.721858   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:16:42.721918   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:16:42.725704   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:16:42.764981   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:16:42.765051   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.793072   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.822676   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:16:42.824024   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:42.826801   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827112   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:42.827137   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827350   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:16:42.831498   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:42.845346   29617 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:16:42.845519   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:42.845589   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:42.883957   29617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 21:16:42.884036   29617 ssh_runner.go:195] Run: which lz4
	I1011 21:16:42.888030   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1011 21:16:42.888109   29617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 21:16:42.892241   29617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 21:16:42.892274   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 21:16:44.230363   29617 crio.go:462] duration metric: took 1.342272134s to copy over tarball
	I1011 21:16:44.230455   29617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 21:16:46.214291   29617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.983794178s)
	I1011 21:16:46.214315   29617 crio.go:469] duration metric: took 1.983922074s to extract the tarball
	I1011 21:16:46.214323   29617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 21:16:46.250833   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:46.298082   29617 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:16:46.298105   29617 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:16:46.298113   29617 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:16:46.298286   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:16:46.298384   29617 ssh_runner.go:195] Run: crio config
	I1011 21:16:46.343467   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:46.343493   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:46.343504   29617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:16:46.343528   29617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:16:46.343703   29617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
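The config above is a single multi-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml. A small sketch, assuming gopkg.in/yaml.v3 is available, that splits such a file into its documents and lists each kind:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
        if err != nil {
            panic(err)
        }
        // Split the multi-document YAML on the "---" separators and report each kind.
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil || meta.Kind == "" {
                continue
            }
            fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
        }
    }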
	
	I1011 21:16:46.343730   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:16:46.343782   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:16:46.359672   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:16:46.359783   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
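The env block in the kube-vip manifest above configures leader election for the control-plane VIP: a 5s lease, a 3s renew deadline and a 1s retry period. A tiny Go check of the usual relationship between those values (this is the standard leader-election rule, not something the manifest itself enforces):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // From the manifest: vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1 (seconds).
        lease := 5 * time.Second
        renewDeadline := 3 * time.Second
        retryPeriod := 1 * time.Second

        // A holder must renew before its lease expires, and retries must fit
        // inside the renew window.
        fmt.Println("renew before lease expires:", renewDeadline < lease)
        fmt.Println("retry fits in renew window:", retryPeriod < renewDeadline)
    }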
	I1011 21:16:46.359850   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:16:46.370362   29617 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:16:46.370421   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:16:46.380573   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:16:46.396912   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:16:46.413759   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:16:46.430823   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1011 21:16:46.447531   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:16:46.451423   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:46.463809   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:46.584169   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:16:46.602286   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:16:46.602304   29617 certs.go:194] generating shared ca certs ...
	I1011 21:16:46.602322   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.602467   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:16:46.602520   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:16:46.602533   29617 certs.go:256] generating profile certs ...
	I1011 21:16:46.602592   29617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:16:46.602638   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt with IP's: []
	I1011 21:16:46.782362   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt ...
	I1011 21:16:46.782395   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt: {Name:mk3593f4e91ffc0372a05bdad3e927ec316a91aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782596   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key ...
	I1011 21:16:46.782611   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key: {Name:mk9677876d62491747fdfd0e3f8d4776645d1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782738   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7
	I1011 21:16:46.782756   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.254]
	I1011 21:16:47.380528   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 ...
	I1011 21:16:47.380560   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7: {Name:mk19e9d91179b46f9b03d4d9246179f41c3327ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380745   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 ...
	I1011 21:16:47.380776   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7: {Name:mk7fedd6c046987d5af851e2eed75ec367a33eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380872   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:16:47.380985   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:16:47.381067   29617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:16:47.381087   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt with IP's: []
	I1011 21:16:47.453906   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt ...
	I1011 21:16:47.453937   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt: {Name:mka90ed4c47ce0265f1b9da519124bd4fc73bbae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454114   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key ...
	I1011 21:16:47.454128   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key: {Name:mk47103fb5abe47f635456ba2a4ed9a69f678b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454230   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:16:47.454250   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:16:47.454266   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:16:47.454284   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:16:47.454303   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:16:47.454319   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:16:47.454335   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:16:47.454354   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
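The crypto.go lines above generate the profile's client, apiserver and proxy-client certificates, signing them with the shared minikube CA and embedding the cluster's IP SANs. A stripped-down sketch of that flow with Go's crypto/x509; the key sizes, lifetimes and subject names below are illustrative, not minikube's actual values:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Hypothetical CA standing in for minikubeCA; the run above reuses the
        // existing CA key under .minikube/ instead of generating a new one.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Apiserver-style serving cert carrying the IP SANs listed in the log:
        // service VIP, localhost, node IP and the HA VIP 192.168.39.254.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.10"), net.ParseIP("192.168.39.254"),
            },
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }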
	I1011 21:16:47.454417   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:16:47.454461   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:16:47.454473   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:16:47.454508   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:16:47.454543   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:16:47.454573   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:16:47.454648   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:47.454696   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.454719   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.454738   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.455273   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:16:47.481574   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:16:47.514683   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:16:47.538141   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:16:47.561021   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:16:47.585590   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:16:47.608816   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:16:47.632949   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:16:47.656849   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:16:47.680043   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:16:47.703417   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:16:47.726027   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:16:47.747378   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:16:47.754019   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:16:47.765407   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770565   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770631   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.776851   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:16:47.788126   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:16:47.799052   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803877   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803931   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.810054   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:16:47.821548   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:16:47.832817   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837775   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837829   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.843943   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
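The openssl/ln pairs above follow the standard OpenSSL trust-store layout: each CA PEM is linked from /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. A short Go sketch of the same derivation, assuming openssl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same idea as the log's "openssl x509 -hash -noout -in <pem>": the printed
        // subject hash becomes the /etc/ssl/certs/<hash>.0 symlink name.
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
    }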
	I1011 21:16:47.855398   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:16:47.859877   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:16:47.859928   29617 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:47.860006   29617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:16:47.860081   29617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:16:47.903170   29617 cri.go:89] found id: ""
	I1011 21:16:47.903248   29617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:16:47.914400   29617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 21:16:47.924721   29617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 21:16:47.935673   29617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 21:16:47.935695   29617 kubeadm.go:157] found existing configuration files:
	
	I1011 21:16:47.935740   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 21:16:47.945454   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 21:16:47.945524   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 21:16:47.955440   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 21:16:47.964875   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 21:16:47.964944   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 21:16:47.974788   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.984258   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 21:16:47.984307   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.993726   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 21:16:48.002584   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 21:16:48.002650   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 21:16:48.012268   29617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 21:16:48.121155   29617 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 21:16:48.121351   29617 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 21:16:48.250203   29617 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 21:16:48.250314   29617 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 21:16:48.250452   29617 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 21:16:48.261245   29617 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 21:16:48.410718   29617 out.go:235]   - Generating certificates and keys ...
	I1011 21:16:48.410844   29617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 21:16:48.410931   29617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 21:16:48.542325   29617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 21:16:48.608543   29617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 21:16:48.797753   29617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 21:16:48.873089   29617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 21:16:49.070716   29617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 21:16:49.071155   29617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.372270   29617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 21:16:49.372512   29617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.423801   29617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 21:16:49.655483   29617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 21:16:49.724172   29617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 21:16:49.724487   29617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 21:16:50.017890   29617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 21:16:50.285355   29617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 21:16:50.392641   29617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 21:16:50.748011   29617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 21:16:50.984708   29617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 21:16:50.985344   29617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 21:16:50.988659   29617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 21:16:50.990557   29617 out.go:235]   - Booting up control plane ...
	I1011 21:16:50.990675   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 21:16:50.990768   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 21:16:50.992112   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 21:16:51.010698   29617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 21:16:51.019483   29617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 21:16:51.019560   29617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 21:16:51.165086   29617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 21:16:51.165244   29617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 21:16:51.666035   29617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.408194ms
	I1011 21:16:51.666178   29617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 21:16:58.166573   29617 kubeadm.go:310] [api-check] The API server is healthy after 6.502304408s
	I1011 21:16:58.179631   29617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 21:16:58.195028   29617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 21:16:58.220647   29617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 21:16:58.220871   29617 kubeadm.go:310] [mark-control-plane] Marking the node ha-610874 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 21:16:58.236113   29617 kubeadm.go:310] [bootstrap-token] Using token: j1o64v.rjb74fe9bovjls5f
	I1011 21:16:58.237740   29617 out.go:235]   - Configuring RBAC rules ...
	I1011 21:16:58.237875   29617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 21:16:58.245441   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 21:16:58.254162   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 21:16:58.259203   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 21:16:58.274345   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 21:16:58.278840   29617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 21:16:58.578576   29617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 21:16:59.008419   29617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 21:16:59.573438   29617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 21:16:59.574394   29617 kubeadm.go:310] 
	I1011 21:16:59.574519   29617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 21:16:59.574537   29617 kubeadm.go:310] 
	I1011 21:16:59.574645   29617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 21:16:59.574659   29617 kubeadm.go:310] 
	I1011 21:16:59.574685   29617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 21:16:59.574753   29617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 21:16:59.574825   29617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 21:16:59.574836   29617 kubeadm.go:310] 
	I1011 21:16:59.574917   29617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 21:16:59.574925   29617 kubeadm.go:310] 
	I1011 21:16:59.574988   29617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 21:16:59.574998   29617 kubeadm.go:310] 
	I1011 21:16:59.575073   29617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 21:16:59.575188   29617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 21:16:59.575286   29617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 21:16:59.575300   29617 kubeadm.go:310] 
	I1011 21:16:59.575406   29617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 21:16:59.575519   29617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 21:16:59.575533   29617 kubeadm.go:310] 
	I1011 21:16:59.575645   29617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.575774   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 21:16:59.575812   29617 kubeadm.go:310] 	--control-plane 
	I1011 21:16:59.575825   29617 kubeadm.go:310] 
	I1011 21:16:59.575924   29617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 21:16:59.575932   29617 kubeadm.go:310] 
	I1011 21:16:59.576044   29617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.576197   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 21:16:59.576985   29617 kubeadm.go:310] W1011 21:16:48.086167     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577396   29617 kubeadm.go:310] W1011 21:16:48.087109     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577500   29617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
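kubeadm's [api-check] step above polls the apiserver's health endpoint until it answers, which took about 6.5s on this run. A rough Go sketch of such a poll against the HA endpoint from the log; skipping TLS verification keeps the sketch short and is not what kubeadm actually does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Poll /healthz on the control-plane endpoint until it returns 200 or we give up.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://control-plane.minikube.internal:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("API server is healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("API server did not become healthy in time")
    }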
	I1011 21:16:59.577512   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:59.577520   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:59.579873   29617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 21:16:59.581130   29617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 21:16:59.586500   29617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 21:16:59.586517   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 21:16:59.606073   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 21:16:59.978632   29617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 21:16:59.978713   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:16:59.978732   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874 minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=true
	I1011 21:17:00.174706   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:00.174708   29617 ops.go:34] apiserver oom_adj: -16
	I1011 21:17:00.675693   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.174849   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.675518   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.174832   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.674899   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.174904   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.254520   29617 kubeadm.go:1113] duration metric: took 3.275873473s to wait for elevateKubeSystemPrivileges
	I1011 21:17:03.254557   29617 kubeadm.go:394] duration metric: took 15.394633584s to StartCluster
	I1011 21:17:03.254574   29617 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.254667   29617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.255426   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.255658   29617 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:03.255670   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 21:17:03.255683   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:17:03.255698   29617 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 21:17:03.255784   29617 addons.go:69] Setting storage-provisioner=true in profile "ha-610874"
	I1011 21:17:03.255803   29617 addons.go:234] Setting addon storage-provisioner=true in "ha-610874"
	I1011 21:17:03.255807   29617 addons.go:69] Setting default-storageclass=true in profile "ha-610874"
	I1011 21:17:03.255835   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.255840   29617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-610874"
	I1011 21:17:03.255868   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:03.256287   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256300   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.256367   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.271522   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1011 21:17:03.271689   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I1011 21:17:03.272056   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272154   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272592   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272609   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272755   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272784   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272931   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273093   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.273112   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273524   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.273562   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.275146   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.275352   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 21:17:03.275763   29617 cert_rotation.go:140] Starting client certificate rotation controller
	I1011 21:17:03.275942   29617 addons.go:234] Setting addon default-storageclass=true in "ha-610874"
	I1011 21:17:03.275971   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.276303   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.276340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.288268   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1011 21:17:03.288701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.289186   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.289212   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.289573   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.289758   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.290984   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1011 21:17:03.291476   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.291798   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.292035   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.292052   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.292353   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.292786   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.292827   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.293969   29617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 21:17:03.295203   29617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.295223   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 21:17:03.295241   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.298221   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298669   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.298695   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298893   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.299039   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.299248   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.299371   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.307894   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33173
	I1011 21:17:03.308319   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.308780   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.308794   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.309115   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.309363   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.311112   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.311334   29617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.311352   29617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 21:17:03.311368   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.314487   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.314914   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.314938   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.315112   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.315274   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.315432   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.315580   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.390668   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 21:17:03.477039   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.523146   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.861068   29617 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1011 21:17:04.076843   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076867   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.076939   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076960   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077121   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077129   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077152   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077162   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077170   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077198   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077208   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077216   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077228   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077423   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077435   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077497   29617 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 21:17:04.077512   29617 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 21:17:04.077537   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077557   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077562   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077613   29617 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1011 21:17:04.077629   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.077640   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.077652   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.088649   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:04.089177   29617 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1011 21:17:04.089196   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.089204   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.089222   29617 round_trippers.go:473]     Content-Type: application/json
	I1011 21:17:04.089229   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.091300   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:04.091435   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.091450   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.091679   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.091716   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.091728   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.093543   29617 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 21:17:04.094783   29617 addons.go:510] duration metric: took 839.089678ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1011 21:17:04.094816   29617 start.go:246] waiting for cluster config update ...
	I1011 21:17:04.094834   29617 start.go:255] writing updated cluster config ...
	I1011 21:17:04.096346   29617 out.go:201] 
	I1011 21:17:04.097685   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:04.097746   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.099389   29617 out.go:177] * Starting "ha-610874-m02" control-plane node in "ha-610874" cluster
	I1011 21:17:04.100656   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:17:04.100673   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:17:04.100774   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:17:04.100788   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:17:04.100851   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.100998   29617 start.go:360] acquireMachinesLock for ha-610874-m02: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:17:04.101042   29617 start.go:364] duration metric: took 25.742µs to acquireMachinesLock for "ha-610874-m02"
	I1011 21:17:04.101063   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:04.101132   29617 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1011 21:17:04.102447   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:17:04.102519   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:04.102554   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:04.117018   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I1011 21:17:04.117574   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:04.118020   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:04.118046   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:04.118342   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:04.118495   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:04.118627   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:04.118734   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:17:04.118757   29617 client.go:168] LocalClient.Create starting
	I1011 21:17:04.118782   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:17:04.118814   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118825   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118865   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:17:04.118883   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118895   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118909   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:17:04.118916   29617 main.go:141] libmachine: (ha-610874-m02) Calling .PreCreateCheck
	I1011 21:17:04.119022   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:04.119344   29617 main.go:141] libmachine: Creating machine...
	I1011 21:17:04.119354   29617 main.go:141] libmachine: (ha-610874-m02) Calling .Create
	I1011 21:17:04.119448   29617 main.go:141] libmachine: (ha-610874-m02) Creating KVM machine...
	I1011 21:17:04.120553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing default KVM network
	I1011 21:17:04.120665   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing private KVM network mk-ha-610874
	I1011 21:17:04.120779   29617 main.go:141] libmachine: (ha-610874-m02) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.120796   29617 main.go:141] libmachine: (ha-610874-m02) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:17:04.120855   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.120779   29991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.120961   29617 main.go:141] libmachine: (ha-610874-m02) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:17:04.350121   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.350001   29991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa...
	I1011 21:17:04.441541   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441397   29991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk...
	I1011 21:17:04.441576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing magic tar header
	I1011 21:17:04.441591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing SSH key tar header
	I1011 21:17:04.441603   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441509   29991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.441619   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02
	I1011 21:17:04.441634   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:17:04.441650   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 (perms=drwx------)
	I1011 21:17:04.441661   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.441676   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:17:04.441687   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:17:04.441702   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:17:04.441718   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:17:04.441730   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:17:04.441739   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:17:04.441771   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:04.441793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:17:04.441805   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:17:04.441813   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home
	I1011 21:17:04.441826   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Skipping /home - not owner
	I1011 21:17:04.442818   29617 main.go:141] libmachine: (ha-610874-m02) define libvirt domain using xml: 
	I1011 21:17:04.442835   29617 main.go:141] libmachine: (ha-610874-m02) <domain type='kvm'>
	I1011 21:17:04.442851   29617 main.go:141] libmachine: (ha-610874-m02)   <name>ha-610874-m02</name>
	I1011 21:17:04.442859   29617 main.go:141] libmachine: (ha-610874-m02)   <memory unit='MiB'>2200</memory>
	I1011 21:17:04.442867   29617 main.go:141] libmachine: (ha-610874-m02)   <vcpu>2</vcpu>
	I1011 21:17:04.442876   29617 main.go:141] libmachine: (ha-610874-m02)   <features>
	I1011 21:17:04.442884   29617 main.go:141] libmachine: (ha-610874-m02)     <acpi/>
	I1011 21:17:04.442894   29617 main.go:141] libmachine: (ha-610874-m02)     <apic/>
	I1011 21:17:04.442901   29617 main.go:141] libmachine: (ha-610874-m02)     <pae/>
	I1011 21:17:04.442909   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.442916   29617 main.go:141] libmachine: (ha-610874-m02)   </features>
	I1011 21:17:04.442924   29617 main.go:141] libmachine: (ha-610874-m02)   <cpu mode='host-passthrough'>
	I1011 21:17:04.442929   29617 main.go:141] libmachine: (ha-610874-m02)   
	I1011 21:17:04.442935   29617 main.go:141] libmachine: (ha-610874-m02)   </cpu>
	I1011 21:17:04.442940   29617 main.go:141] libmachine: (ha-610874-m02)   <os>
	I1011 21:17:04.442944   29617 main.go:141] libmachine: (ha-610874-m02)     <type>hvm</type>
	I1011 21:17:04.442949   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='cdrom'/>
	I1011 21:17:04.442953   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='hd'/>
	I1011 21:17:04.442958   29617 main.go:141] libmachine: (ha-610874-m02)     <bootmenu enable='no'/>
	I1011 21:17:04.442966   29617 main.go:141] libmachine: (ha-610874-m02)   </os>
	I1011 21:17:04.442970   29617 main.go:141] libmachine: (ha-610874-m02)   <devices>
	I1011 21:17:04.442975   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='cdrom'>
	I1011 21:17:04.442982   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/boot2docker.iso'/>
	I1011 21:17:04.442988   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hdc' bus='scsi'/>
	I1011 21:17:04.442992   29617 main.go:141] libmachine: (ha-610874-m02)       <readonly/>
	I1011 21:17:04.442999   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443009   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='disk'>
	I1011 21:17:04.443018   29617 main.go:141] libmachine: (ha-610874-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:17:04.443028   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk'/>
	I1011 21:17:04.443033   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hda' bus='virtio'/>
	I1011 21:17:04.443037   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443042   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443047   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='mk-ha-610874'/>
	I1011 21:17:04.443052   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443057   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443061   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443066   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='default'/>
	I1011 21:17:04.443071   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443076   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443080   29617 main.go:141] libmachine: (ha-610874-m02)     <serial type='pty'>
	I1011 21:17:04.443085   29617 main.go:141] libmachine: (ha-610874-m02)       <target port='0'/>
	I1011 21:17:04.443089   29617 main.go:141] libmachine: (ha-610874-m02)     </serial>
	I1011 21:17:04.443094   29617 main.go:141] libmachine: (ha-610874-m02)     <console type='pty'>
	I1011 21:17:04.443099   29617 main.go:141] libmachine: (ha-610874-m02)       <target type='serial' port='0'/>
	I1011 21:17:04.443103   29617 main.go:141] libmachine: (ha-610874-m02)     </console>
	I1011 21:17:04.443109   29617 main.go:141] libmachine: (ha-610874-m02)     <rng model='virtio'>
	I1011 21:17:04.443137   29617 main.go:141] libmachine: (ha-610874-m02)       <backend model='random'>/dev/random</backend>
	I1011 21:17:04.443157   29617 main.go:141] libmachine: (ha-610874-m02)     </rng>
	I1011 21:17:04.443167   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443173   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443189   29617 main.go:141] libmachine: (ha-610874-m02)   </devices>
	I1011 21:17:04.443198   29617 main.go:141] libmachine: (ha-610874-m02) </domain>
	I1011 21:17:04.443208   29617 main.go:141] libmachine: (ha-610874-m02) 
	I1011 21:17:04.449596   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f0:af:4d in network default
	I1011 21:17:04.450115   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring networks are active...
	I1011 21:17:04.450137   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:04.450871   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network default is active
	I1011 21:17:04.451172   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network mk-ha-610874 is active
	I1011 21:17:04.451696   29617 main.go:141] libmachine: (ha-610874-m02) Getting domain xml...
	I1011 21:17:04.452466   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:05.723228   29617 main.go:141] libmachine: (ha-610874-m02) Waiting to get IP...
	I1011 21:17:05.723997   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.724437   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.724489   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.724421   29991 retry.go:31] will retry after 216.617717ms: waiting for machine to come up
	I1011 21:17:05.943023   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.943470   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.943493   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.943418   29991 retry.go:31] will retry after 323.475706ms: waiting for machine to come up
	I1011 21:17:06.268759   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.269130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.269185   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.269071   29991 retry.go:31] will retry after 341.815784ms: waiting for machine to come up
	I1011 21:17:06.612587   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.613044   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.613069   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.612994   29991 retry.go:31] will retry after 575.567056ms: waiting for machine to come up
	I1011 21:17:07.189626   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.190024   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.190052   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.189979   29991 retry.go:31] will retry after 508.01524ms: waiting for machine to come up
	I1011 21:17:07.699512   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.699870   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.699896   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.699824   29991 retry.go:31] will retry after 706.438375ms: waiting for machine to come up
	I1011 21:17:08.408130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:08.408534   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:08.408553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:08.408491   29991 retry.go:31] will retry after 819.845939ms: waiting for machine to come up
	I1011 21:17:09.229809   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:09.230337   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:09.230361   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:09.230274   29991 retry.go:31] will retry after 1.08916769s: waiting for machine to come up
	I1011 21:17:10.320875   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:10.321316   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:10.321344   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:10.321274   29991 retry.go:31] will retry after 1.825013226s: waiting for machine to come up
	I1011 21:17:12.148418   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:12.148892   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:12.148912   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:12.148854   29991 retry.go:31] will retry after 1.911054739s: waiting for machine to come up
	I1011 21:17:14.062931   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:14.063353   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:14.063381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:14.063300   29991 retry.go:31] will retry after 2.512289875s: waiting for machine to come up
	I1011 21:17:16.577169   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:16.577555   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:16.577580   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:16.577519   29991 retry.go:31] will retry after 3.376491238s: waiting for machine to come up
	I1011 21:17:19.955606   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:19.955968   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:19.955995   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:19.955923   29991 retry.go:31] will retry after 4.049589987s: waiting for machine to come up
	I1011 21:17:24.010143   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010574   29617 main.go:141] libmachine: (ha-610874-m02) Found IP for machine: 192.168.39.11
	I1011 21:17:24.010593   29617 main.go:141] libmachine: (ha-610874-m02) Reserving static IP address...
	I1011 21:17:24.010602   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010971   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "ha-610874-m02", mac: "52:54:00:f3:cf:5a", ip: "192.168.39.11"} in network mk-ha-610874
	I1011 21:17:24.079043   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:24.079077   29617 main.go:141] libmachine: (ha-610874-m02) Reserved static IP address: 192.168.39.11
	I1011 21:17:24.079093   29617 main.go:141] libmachine: (ha-610874-m02) Waiting for SSH to be available...
	I1011 21:17:24.081543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.081867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874
	I1011 21:17:24.081880   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:f3:cf:5a
	I1011 21:17:24.082047   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:24.082076   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:24.082376   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:24.082572   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:24.082591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:24.086567   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:17:24.086597   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:17:24.086608   29617 main.go:141] libmachine: (ha-610874-m02) DBG | command : exit 0
	I1011 21:17:24.086627   29617 main.go:141] libmachine: (ha-610874-m02) DBG | err     : exit status 255
	I1011 21:17:24.086641   29617 main.go:141] libmachine: (ha-610874-m02) DBG | output  : 
	I1011 21:17:27.089089   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:27.091628   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.091976   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.092001   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.092162   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:27.092189   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:27.092213   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:27.092221   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:27.092230   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:27.218963   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: <nil>: 
	I1011 21:17:27.219245   29617 main.go:141] libmachine: (ha-610874-m02) KVM machine creation complete!
	I1011 21:17:27.219616   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:27.220149   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220344   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220511   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:17:27.220532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetState
	I1011 21:17:27.221755   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:17:27.221770   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:17:27.221778   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:17:27.221786   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.223867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224229   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.224267   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224374   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.224532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224655   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224768   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.224964   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.225164   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.225177   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:17:27.333813   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.333841   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:17:27.333852   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.336538   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.336885   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.336909   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.337071   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.337262   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337411   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337545   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.337696   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.337866   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.337878   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:17:27.447511   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:17:27.447576   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:17:27.447583   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:17:27.447590   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.447842   29617 buildroot.go:166] provisioning hostname "ha-610874-m02"
	I1011 21:17:27.447866   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.448033   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.450381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450763   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.450793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450924   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.451086   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451309   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451419   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.451547   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.451737   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.451749   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m02 && echo "ha-610874-m02" | sudo tee /etc/hostname
	I1011 21:17:27.572801   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m02
	
	I1011 21:17:27.572834   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.575352   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575751   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.575776   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575941   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.576093   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576220   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576346   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.576461   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.576637   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.576661   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:17:27.695886   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.695916   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:17:27.695938   29617 buildroot.go:174] setting up certificates
	I1011 21:17:27.695952   29617 provision.go:84] configureAuth start
	I1011 21:17:27.695968   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.696239   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:27.698924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699311   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.699342   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699459   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.701614   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.701924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.701942   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.702129   29617 provision.go:143] copyHostCerts
	I1011 21:17:27.702158   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702190   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:17:27.702199   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702263   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:17:27.702355   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702381   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:17:27.702389   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702438   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:17:27.702535   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702560   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:17:27.702567   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702604   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:17:27.702691   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m02 san=[127.0.0.1 192.168.39.11 ha-610874-m02 localhost minikube]
	I1011 21:17:27.916455   29617 provision.go:177] copyRemoteCerts
	I1011 21:17:27.916517   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:17:27.916546   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.919220   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919586   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.919612   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919767   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.919931   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.920084   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.920214   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.005137   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:17:28.005206   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:17:28.030798   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:17:28.030868   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:17:28.053929   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:17:28.053992   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:17:28.077344   29617 provision.go:87] duration metric: took 381.381213ms to configureAuth
	I1011 21:17:28.077368   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:17:28.077553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:28.077631   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.079998   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080363   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.080391   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080550   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.080711   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080860   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080957   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.081126   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.081276   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.081289   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:17:28.305072   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:17:28.305099   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:17:28.305107   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetURL
	I1011 21:17:28.306348   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using libvirt version 6000000
	I1011 21:17:28.308766   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309119   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.309148   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309322   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:17:28.309336   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:17:28.309345   29617 client.go:171] duration metric: took 24.190578436s to LocalClient.Create
	I1011 21:17:28.309369   29617 start.go:167] duration metric: took 24.190632715s to libmachine.API.Create "ha-610874"
	I1011 21:17:28.309380   29617 start.go:293] postStartSetup for "ha-610874-m02" (driver="kvm2")
	I1011 21:17:28.309393   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:17:28.309414   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.309649   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:17:28.309678   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.311900   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312234   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.312257   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312366   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.312513   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.312670   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.312813   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.401258   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:17:28.405713   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:17:28.405741   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:17:28.405819   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:17:28.405893   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:17:28.405901   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:17:28.405976   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:17:28.415792   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:28.439288   29617 start.go:296] duration metric: took 129.894011ms for postStartSetup
	I1011 21:17:28.439338   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:28.439884   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.442343   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442733   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.442761   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442929   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:28.443099   29617 start.go:128] duration metric: took 24.341953324s to createHost
	I1011 21:17:28.443119   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.445585   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.445871   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.445894   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.446037   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.446185   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446313   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446509   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.446712   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.446859   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.446869   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:17:28.555655   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681448.532334020
	
	I1011 21:17:28.555684   29617 fix.go:216] guest clock: 1728681448.532334020
	I1011 21:17:28.555698   29617 fix.go:229] Guest: 2024-10-11 21:17:28.53233402 +0000 UTC Remote: 2024-10-11 21:17:28.443109707 +0000 UTC m=+72.164953096 (delta=89.224313ms)
	I1011 21:17:28.555717   29617 fix.go:200] guest clock delta is within tolerance: 89.224313ms
	I1011 21:17:28.555723   29617 start.go:83] releasing machines lock for "ha-610874-m02", held for 24.454670186s
	I1011 21:17:28.555747   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.555979   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.558215   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.558576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.558610   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.560996   29617 out.go:177] * Found network options:
	I1011 21:17:28.562345   29617 out.go:177]   - NO_PROXY=192.168.39.10
	W1011 21:17:28.563437   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.563463   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.563914   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564081   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564167   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:17:28.564198   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	W1011 21:17:28.564293   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.564371   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:17:28.564394   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.566543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566887   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.566920   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566948   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567066   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567235   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567341   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.567349   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567359   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567462   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.567515   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567649   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567774   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567889   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.804794   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:17:28.816172   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:17:28.816234   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:17:28.833684   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:17:28.833707   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:17:28.833785   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:17:28.850682   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:17:28.865268   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:17:28.865314   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:17:28.879804   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:17:28.893790   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:17:29.005060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:17:29.161552   29617 docker.go:233] disabling docker service ...
	I1011 21:17:29.161623   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:17:29.176030   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:17:29.188905   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:17:29.314012   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:17:29.444969   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:17:29.458929   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:17:29.477279   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:17:29.477336   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.487485   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:17:29.487557   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.497725   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.508074   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.518078   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:17:29.528405   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.538441   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.555119   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.568308   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:17:29.578239   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:17:29.578297   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:17:29.591777   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:17:29.601766   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:29.733693   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:17:29.832686   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:17:29.832769   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:17:29.837474   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:17:29.837531   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:17:29.841328   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:17:29.885910   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:17:29.885997   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.915959   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.947445   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:17:29.948743   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:17:29.949776   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:29.952438   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952742   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:29.952767   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952926   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:17:29.957045   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:29.969401   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:17:29.969618   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:29.969904   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.969953   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:29.984875   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I1011 21:17:29.985308   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:29.985749   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:29.985772   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:29.986088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:29.986307   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:29.987951   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:29.988270   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.988309   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:30.002903   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I1011 21:17:30.003325   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:30.003771   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:30.003791   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:30.004088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:30.004322   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:30.004478   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.11
	I1011 21:17:30.004490   29617 certs.go:194] generating shared ca certs ...
	I1011 21:17:30.004507   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.004645   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:17:30.004706   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:17:30.004720   29617 certs.go:256] generating profile certs ...
	I1011 21:17:30.004812   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:17:30.004845   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a
	I1011 21:17:30.004865   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.254]
	I1011 21:17:30.068798   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a ...
	I1011 21:17:30.068829   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a: {Name:mk7e577273a37f1215e925a89aaf2057d9d70c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069010   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a ...
	I1011 21:17:30.069026   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a: {Name:mk272cb1eed2069075ccbf59f795f6618abcd353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069135   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:17:30.069298   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:17:30.069453   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:17:30.069470   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:17:30.069497   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:17:30.069514   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:17:30.069533   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:17:30.069553   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:17:30.069571   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:17:30.069589   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:17:30.069614   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:17:30.069674   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:17:30.069714   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:17:30.069727   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:17:30.069761   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:17:30.069795   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:17:30.069830   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:17:30.069888   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:30.069930   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.069950   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.069968   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.070008   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:30.073028   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073411   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:30.073439   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073677   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:30.073887   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:30.074102   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:30.074339   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:30.150977   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:17:30.155841   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:17:30.167973   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:17:30.172398   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:17:30.183178   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:17:30.187494   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:17:30.198396   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:17:30.202690   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:17:30.213924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:17:30.218228   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:17:30.229999   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:17:30.234409   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:17:30.246054   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:17:30.271630   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:17:30.295598   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:17:30.320158   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:17:30.346169   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1011 21:17:30.370669   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 21:17:30.396095   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:17:30.424361   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:17:30.449179   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:17:30.473592   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:17:30.497140   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:17:30.520773   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:17:30.537475   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:17:30.553696   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:17:30.573515   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:17:30.591050   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:17:30.607456   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:17:30.623663   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:17:30.639999   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:17:30.645863   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:17:30.656839   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661661   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661737   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.667927   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:17:30.678586   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:17:30.690465   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695106   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695178   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.700843   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:17:30.711530   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:17:30.722262   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726883   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726930   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.732484   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:17:30.743130   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:17:30.747324   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:17:30.747378   29617 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I1011 21:17:30.747471   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:17:30.747503   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:17:30.747550   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:17:30.764827   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:17:30.764898   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1011 21:17:30.764958   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.774946   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:17:30.775004   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.785084   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:17:30.785115   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785173   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785210   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1011 21:17:30.785254   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1011 21:17:30.789999   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:17:30.790028   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:17:31.801070   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.801149   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.806312   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:17:31.806341   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:17:31.977093   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:17:32.035477   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.035590   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.049208   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:17:32.049241   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1011 21:17:32.383282   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:17:32.393090   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1011 21:17:32.409524   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:17:32.426347   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:17:32.443202   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:17:32.447193   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:32.459719   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:32.593682   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:32.611619   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:32.611941   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:32.611988   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:32.626650   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1011 21:17:32.627104   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:32.627665   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:32.627681   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:32.627997   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:32.628209   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:32.628355   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:17:32.628464   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:17:32.628490   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:32.631170   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631565   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:32.631594   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631751   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:32.631931   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:32.632068   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:32.632206   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:32.785858   29617 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:32.785905   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I1011 21:17:54.047983   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (21.262048482s)
	I1011 21:17:54.048020   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:17:54.524404   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m02 minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:17:54.662523   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:17:54.782630   29617 start.go:319] duration metric: took 22.154260063s to joinCluster
	I1011 21:17:54.782703   29617 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:54.782988   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:54.784979   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:17:54.786144   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:55.109738   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:55.128457   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:55.128804   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:17:55.128882   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:17:55.129129   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m02" to be "Ready" ...
	I1011 21:17:55.129231   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.129241   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.129252   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.129258   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.140234   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:55.629803   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.629830   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.629841   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.629847   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.633275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.129516   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.129541   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.129552   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.129559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.132902   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.629511   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.629534   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.634698   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:17:57.129572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.129597   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.129605   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.129609   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.132668   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:57.133230   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:57.629639   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.629659   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.629667   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.629670   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.632880   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:58.129393   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.129417   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.129441   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.129446   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.132403   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:58.629999   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.630018   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.630026   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.630030   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.633746   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.130079   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.130096   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.130104   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.130108   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.133281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.133973   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:59.629323   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.629347   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.629358   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.629364   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.632796   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.129728   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.129749   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.129758   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.129767   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.133151   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.629977   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.630003   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.630015   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.630021   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.633099   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.130138   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.130160   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.130171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.130182   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.133307   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.134143   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:01.630135   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.630158   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.630171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.630177   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.634516   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:02.129957   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.129977   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.129985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.129990   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.209108   29617 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I1011 21:18:02.630223   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.630241   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.630249   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.630254   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.633360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:03.130145   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.130165   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.130172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.130176   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.134521   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:03.135482   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:03.630325   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.630348   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.630357   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.630363   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.633906   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.129848   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.129869   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.129880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.129885   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.133353   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.630352   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.630378   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.630391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.630395   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.633784   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:05.129622   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.129647   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.129658   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.129664   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.174718   29617 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1011 21:18:05.175206   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:05.629573   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.629601   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.629610   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.629614   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.633377   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.129366   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.129388   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.129396   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.129399   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.132592   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.630152   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.630174   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.630184   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.630190   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.633604   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.130251   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.130280   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.130292   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.130299   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.133640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.629546   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.629568   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.629578   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.629583   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.632932   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.633891   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:08.129786   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.129803   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.129811   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.129815   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.133290   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:08.629506   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.629533   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.633075   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.129541   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.129559   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.129567   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.129572   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.132640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.629665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.629684   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.629692   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.629697   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.632858   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:10.129866   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.129885   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.129893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.129897   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.132615   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:10.133150   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:10.629443   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.629475   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.629489   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.629493   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.632970   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.130002   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.130024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.130032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.130035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.133677   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.629439   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.629465   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.629477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.629482   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.632816   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.130049   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.130071   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.130080   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.130083   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.133179   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.133716   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:12.630085   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.630110   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.630121   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.630127   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.633114   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:13.130226   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.130245   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.130253   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.130258   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.133707   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:13.629976   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.630005   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.630016   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.630022   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.633601   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.129823   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.129846   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.129857   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.129863   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.132927   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.630032   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.630053   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.630062   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.630070   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.633208   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.633750   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:15.129885   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.129909   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.129919   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.129924   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.132958   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.630000   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.630024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.630032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.630035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.632986   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.633633   29617 node_ready.go:49] node "ha-610874-m02" has status "Ready":"True"
	I1011 21:18:15.633647   29617 node_ready.go:38] duration metric: took 20.504503338s for node "ha-610874-m02" to be "Ready" ...
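
	The polling visible above (one GET of /api/v1/nodes/ha-610874-m02 roughly every 500ms until the node reports "Ready":"True") can be reproduced with a small client-go loop. The following is an illustrative sketch only, not minikube's node_ready implementation; the kubeconfig path, the 6-minute deadline, and the 500ms interval are assumptions taken from the log context.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the report itself uses a Jenkins-specific path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodeName := "ha-610874-m02" // node name taken from the log above
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("node %q is Ready\n", nodeName)
						return
					}
				}
			}
			// The log shows one GET roughly every 500ms.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("timed out waiting for node %q to become Ready\n", nodeName)
	}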
	I1011 21:18:15.633655   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:15.633709   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:15.633718   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.633724   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.633728   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.637582   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.643886   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.643972   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:18:15.643983   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.643993   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.643999   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.646763   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.647514   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.647529   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.647536   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.647539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.649945   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.650586   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.650602   29617 pod_ready.go:82] duration metric: took 6.694777ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650623   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650679   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:18:15.650688   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.650699   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.650707   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.652943   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.653673   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.653687   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.653696   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.653701   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.655886   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.656382   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.656397   29617 pod_ready.go:82] duration metric: took 5.765488ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656405   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656451   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:18:15.656461   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.656471   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.656477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.658729   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.659391   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.659409   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.659419   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.659426   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.661629   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.662114   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.662130   29617 pod_ready.go:82] duration metric: took 5.719352ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662137   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662181   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:18:15.662190   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.662197   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.662201   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.664800   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.665273   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.665286   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.665294   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.665298   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.667272   29617 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1011 21:18:15.667736   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.667757   29617 pod_ready.go:82] duration metric: took 5.613486ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.667773   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.830074   29617 request.go:632] Waited for 162.243136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830160   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830168   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.830178   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.830188   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.833590   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.030666   29617 request.go:632] Waited for 196.378996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030722   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030728   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.030735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.030739   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.033962   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.034580   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.034599   29617 pod_ready.go:82] duration metric: took 366.81416ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.034608   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.230672   29617 request.go:632] Waited for 195.982779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230778   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230790   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.230801   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.230810   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.234030   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.430609   29617 request.go:632] Waited for 195.69013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430701   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430712   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.430735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.433742   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.434219   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.434239   29617 pod_ready.go:82] duration metric: took 399.609699ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.434252   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.630260   29617 request.go:632] Waited for 195.941074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630337   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630342   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.630350   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.630357   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.633657   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.830752   29617 request.go:632] Waited for 196.369395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830804   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830811   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.830820   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.830827   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.833807   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.834437   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.834455   29617 pod_ready.go:82] duration metric: took 400.195609ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.834465   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.030516   29617 request.go:632] Waited for 195.993213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030589   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030595   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.030607   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.030627   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.034122   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.230257   29617 request.go:632] Waited for 195.302255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230322   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230329   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.230337   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.230342   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.233560   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.234217   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.234239   29617 pod_ready.go:82] duration metric: took 399.767293ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.234256   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.430433   29617 request.go:632] Waited for 196.107897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430509   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430515   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.430526   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.430534   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.434262   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.630356   29617 request.go:632] Waited for 195.345057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630426   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630431   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.630439   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.630444   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.633591   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.634036   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.634054   29617 pod_ready.go:82] duration metric: took 399.790817ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.634064   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.830520   29617 request.go:632] Waited for 196.385742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830591   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830596   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.830603   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.830607   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.833974   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.030999   29617 request.go:632] Waited for 196.369359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031062   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031068   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.031075   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.031079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.034522   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.035045   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.035060   29617 pod_ready.go:82] duration metric: took 400.990689ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.035069   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.230101   29617 request.go:632] Waited for 194.964535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230173   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230179   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.230187   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.230191   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.233153   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:18.430174   29617 request.go:632] Waited for 196.304225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430252   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430258   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.430265   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.430271   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.433684   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.434857   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.434876   29617 pod_ready.go:82] duration metric: took 399.800525ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.434886   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.630997   29617 request.go:632] Waited for 196.051862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631067   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631072   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.631079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.631090   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.634569   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.830555   29617 request.go:632] Waited for 195.378028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830645   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830652   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.830659   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.830665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.834017   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.834881   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.834901   29617 pod_ready.go:82] duration metric: took 400.009355ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.834913   29617 pod_ready.go:39] duration metric: took 3.201246724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:18.834925   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:18:18.834977   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:18:18.851851   29617 api_server.go:72] duration metric: took 24.069111498s to wait for apiserver process to appear ...
	I1011 21:18:18.851878   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:18:18.851897   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:18:18.856543   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:18:18.856610   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:18:18.856615   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.856622   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.856626   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.857613   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:18:18.857701   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:18:18.857721   29617 api_server.go:131] duration metric: took 5.836547ms to wait for apiserver health ...
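
	The two probes logged just above (GET /healthz returning the literal body "ok", then GET /version reporting v1.31.1) correspond to the following hedged client-go sketch; it is not minikube's api_server.go code, and the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// GET /healthz; a healthy apiserver answers with the literal body "ok".
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil || string(body) != "ok" {
			fmt.Println("apiserver not healthy yet:", err)
			return
		}
		// GET /version; this run reports v1.31.1.
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("healthz ok, control plane version:", v.GitVersion)
	}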
	I1011 21:18:18.857730   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:18:19.030066   29617 request.go:632] Waited for 172.254223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030130   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030136   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.030143   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.030148   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.034696   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.039508   29617 system_pods.go:59] 17 kube-system pods found
	I1011 21:18:19.039540   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.039546   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.039551   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.039557   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.039561   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.039566   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.039570   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.039579   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.039584   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.039592   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.039597   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.039601   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.039606   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.039612   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.039615   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.039619   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.039622   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.039631   29617 system_pods.go:74] duration metric: took 181.896084ms to wait for pod list to return data ...
	I1011 21:18:19.039640   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:18:19.230981   29617 request.go:632] Waited for 191.269571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231051   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231057   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.231064   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.231067   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.235209   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.235407   29617 default_sa.go:45] found service account: "default"
	I1011 21:18:19.235421   29617 default_sa.go:55] duration metric: took 195.775642ms for default service account to be created ...
	I1011 21:18:19.235428   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:18:19.430605   29617 request.go:632] Waited for 195.123077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430704   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430710   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.430718   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.435793   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:18:19.439894   29617 system_pods.go:86] 17 kube-system pods found
	I1011 21:18:19.439921   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.439929   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.439935   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.439942   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.439947   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.439953   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.439959   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.439965   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.439972   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.439980   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.439986   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.439995   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.440002   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.440010   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.440016   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.440020   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.440025   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.440033   29617 system_pods.go:126] duration metric: took 204.599583ms to wait for k8s-apps to be running ...
	I1011 21:18:19.440045   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:18:19.440094   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:19.455815   29617 system_svc.go:56] duration metric: took 15.763998ms WaitForService to wait for kubelet
	I1011 21:18:19.455841   29617 kubeadm.go:582] duration metric: took 24.673107672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:18:19.455860   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:18:19.630302   29617 request.go:632] Waited for 174.358774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630357   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630364   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.630372   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.630379   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.634356   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:19.635316   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635343   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635358   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635363   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635371   29617 node_conditions.go:105] duration metric: took 179.50548ms to run NodePressure ...
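
	The NodePressure step above reads each node's capacity (ephemeral storage 17734596Ki, 2 CPUs in this run). A minimal sketch of reading the same fields with client-go is shown below; again this is illustrative, not the node_conditions.go implementation, and the kubeconfig path is assumed.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a ResourceList keyed by resource name.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}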
	I1011 21:18:19.635384   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:18:19.635415   29617 start.go:255] writing updated cluster config ...
	I1011 21:18:19.637553   29617 out.go:201] 
	I1011 21:18:19.638933   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:19.639018   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.640415   29617 out.go:177] * Starting "ha-610874-m03" control-plane node in "ha-610874" cluster
	I1011 21:18:19.641511   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:18:19.641529   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:18:19.641627   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:18:19.641638   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:18:19.641712   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.641856   29617 start.go:360] acquireMachinesLock for ha-610874-m03: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:18:19.641897   29617 start.go:364] duration metric: took 24.129µs to acquireMachinesLock for "ha-610874-m03"
	I1011 21:18:19.641912   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:19.642000   29617 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1011 21:18:19.643322   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:18:19.643394   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:19.643424   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:19.657905   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I1011 21:18:19.658394   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:19.658868   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:19.658887   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:19.659186   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:19.659360   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:19.659497   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:19.659661   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:18:19.659689   29617 client.go:168] LocalClient.Create starting
	I1011 21:18:19.659716   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:18:19.659744   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659756   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659802   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:18:19.659820   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659830   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659844   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:18:19.659851   29617 main.go:141] libmachine: (ha-610874-m03) Calling .PreCreateCheck
	I1011 21:18:19.659994   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:19.660351   29617 main.go:141] libmachine: Creating machine...
	I1011 21:18:19.660362   29617 main.go:141] libmachine: (ha-610874-m03) Calling .Create
	I1011 21:18:19.660504   29617 main.go:141] libmachine: (ha-610874-m03) Creating KVM machine...
	I1011 21:18:19.661678   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing default KVM network
	I1011 21:18:19.661785   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing private KVM network mk-ha-610874
	I1011 21:18:19.661907   29617 main.go:141] libmachine: (ha-610874-m03) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.661930   29617 main.go:141] libmachine: (ha-610874-m03) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:18:19.662023   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.661913   30793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.662086   29617 main.go:141] libmachine: (ha-610874-m03) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:18:19.893907   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.893764   30793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa...
	I1011 21:18:19.985249   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985139   30793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk...
	I1011 21:18:19.985285   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing magic tar header
	I1011 21:18:19.985300   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing SSH key tar header
	I1011 21:18:19.985311   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985257   30793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.985329   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03
	I1011 21:18:19.985350   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 (perms=drwx------)
	I1011 21:18:19.985373   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:18:19.985396   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.985411   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:18:19.985426   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:18:19.985434   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:18:19.985440   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:18:19.985456   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:18:19.985468   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:18:19.985478   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:18:19.985499   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:18:19.985509   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:19.985516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home
	I1011 21:18:19.985526   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Skipping /home - not owner
	I1011 21:18:19.986460   29617 main.go:141] libmachine: (ha-610874-m03) define libvirt domain using xml: 
	I1011 21:18:19.986487   29617 main.go:141] libmachine: (ha-610874-m03) <domain type='kvm'>
	I1011 21:18:19.986497   29617 main.go:141] libmachine: (ha-610874-m03)   <name>ha-610874-m03</name>
	I1011 21:18:19.986505   29617 main.go:141] libmachine: (ha-610874-m03)   <memory unit='MiB'>2200</memory>
	I1011 21:18:19.986513   29617 main.go:141] libmachine: (ha-610874-m03)   <vcpu>2</vcpu>
	I1011 21:18:19.986528   29617 main.go:141] libmachine: (ha-610874-m03)   <features>
	I1011 21:18:19.986539   29617 main.go:141] libmachine: (ha-610874-m03)     <acpi/>
	I1011 21:18:19.986547   29617 main.go:141] libmachine: (ha-610874-m03)     <apic/>
	I1011 21:18:19.986559   29617 main.go:141] libmachine: (ha-610874-m03)     <pae/>
	I1011 21:18:19.986567   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.986578   29617 main.go:141] libmachine: (ha-610874-m03)   </features>
	I1011 21:18:19.986587   29617 main.go:141] libmachine: (ha-610874-m03)   <cpu mode='host-passthrough'>
	I1011 21:18:19.986598   29617 main.go:141] libmachine: (ha-610874-m03)   
	I1011 21:18:19.986605   29617 main.go:141] libmachine: (ha-610874-m03)   </cpu>
	I1011 21:18:19.986657   29617 main.go:141] libmachine: (ha-610874-m03)   <os>
	I1011 21:18:19.986683   29617 main.go:141] libmachine: (ha-610874-m03)     <type>hvm</type>
	I1011 21:18:19.986694   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='cdrom'/>
	I1011 21:18:19.986706   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='hd'/>
	I1011 21:18:19.986714   29617 main.go:141] libmachine: (ha-610874-m03)     <bootmenu enable='no'/>
	I1011 21:18:19.986723   29617 main.go:141] libmachine: (ha-610874-m03)   </os>
	I1011 21:18:19.986733   29617 main.go:141] libmachine: (ha-610874-m03)   <devices>
	I1011 21:18:19.986743   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='cdrom'>
	I1011 21:18:19.986759   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/boot2docker.iso'/>
	I1011 21:18:19.986773   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hdc' bus='scsi'/>
	I1011 21:18:19.986784   29617 main.go:141] libmachine: (ha-610874-m03)       <readonly/>
	I1011 21:18:19.986793   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986804   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='disk'>
	I1011 21:18:19.986816   29617 main.go:141] libmachine: (ha-610874-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:18:19.986831   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk'/>
	I1011 21:18:19.986840   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hda' bus='virtio'/>
	I1011 21:18:19.986871   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986898   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986911   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='mk-ha-610874'/>
	I1011 21:18:19.986922   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986933   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986941   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986948   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='default'/>
	I1011 21:18:19.986962   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986972   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986987   29617 main.go:141] libmachine: (ha-610874-m03)     <serial type='pty'>
	I1011 21:18:19.986999   29617 main.go:141] libmachine: (ha-610874-m03)       <target port='0'/>
	I1011 21:18:19.987006   29617 main.go:141] libmachine: (ha-610874-m03)     </serial>
	I1011 21:18:19.987015   29617 main.go:141] libmachine: (ha-610874-m03)     <console type='pty'>
	I1011 21:18:19.987025   29617 main.go:141] libmachine: (ha-610874-m03)       <target type='serial' port='0'/>
	I1011 21:18:19.987033   29617 main.go:141] libmachine: (ha-610874-m03)     </console>
	I1011 21:18:19.987052   29617 main.go:141] libmachine: (ha-610874-m03)     <rng model='virtio'>
	I1011 21:18:19.987060   29617 main.go:141] libmachine: (ha-610874-m03)       <backend model='random'>/dev/random</backend>
	I1011 21:18:19.987068   29617 main.go:141] libmachine: (ha-610874-m03)     </rng>
	I1011 21:18:19.987076   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987087   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987095   29617 main.go:141] libmachine: (ha-610874-m03)   </devices>
	I1011 21:18:19.987107   29617 main.go:141] libmachine: (ha-610874-m03) </domain>
	I1011 21:18:19.987120   29617 main.go:141] libmachine: (ha-610874-m03) 
	I1011 21:18:19.993869   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:ec:a1:8a in network default
	I1011 21:18:19.994634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:19.994661   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring networks are active...
	I1011 21:18:19.995468   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network default is active
	I1011 21:18:19.995798   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network mk-ha-610874 is active
	I1011 21:18:19.996173   29617 main.go:141] libmachine: (ha-610874-m03) Getting domain xml...
	I1011 21:18:19.996928   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:21.254226   29617 main.go:141] libmachine: (ha-610874-m03) Waiting to get IP...
	I1011 21:18:21.254939   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.255287   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.255333   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.255277   30793 retry.go:31] will retry after 299.921958ms: waiting for machine to come up
	I1011 21:18:21.557116   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.557606   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.557634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.557554   30793 retry.go:31] will retry after 286.000289ms: waiting for machine to come up
	I1011 21:18:21.844948   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.845467   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.845490   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.845417   30793 retry.go:31] will retry after 387.119662ms: waiting for machine to come up
	I1011 21:18:22.233861   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.234347   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.234371   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.234316   30793 retry.go:31] will retry after 432.218769ms: waiting for machine to come up
	I1011 21:18:22.667570   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.668013   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.668044   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.667960   30793 retry.go:31] will retry after 681.692732ms: waiting for machine to come up
	I1011 21:18:23.350671   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:23.351087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:23.351114   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:23.351059   30793 retry.go:31] will retry after 838.189989ms: waiting for machine to come up
	I1011 21:18:24.191008   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:24.191479   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:24.191510   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:24.191434   30793 retry.go:31] will retry after 815.751815ms: waiting for machine to come up
	I1011 21:18:25.008738   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:25.009063   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:25.009087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:25.009033   30793 retry.go:31] will retry after 1.238801147s: waiting for machine to come up
	I1011 21:18:26.249732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:26.250130   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:26.250160   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:26.250077   30793 retry.go:31] will retry after 1.384996284s: waiting for machine to come up
	I1011 21:18:27.636107   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:27.636581   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:27.636616   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:27.636560   30793 retry.go:31] will retry after 2.228451179s: waiting for machine to come up
	I1011 21:18:29.866214   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:29.866564   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:29.866592   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:29.866517   30793 retry.go:31] will retry after 2.670642081s: waiting for machine to come up
	I1011 21:18:32.539631   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:32.539928   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:32.539955   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:32.539912   30793 retry.go:31] will retry after 2.348031686s: waiting for machine to come up
	I1011 21:18:34.889816   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:34.890238   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:34.890284   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:34.890163   30793 retry.go:31] will retry after 4.066011924s: waiting for machine to come up
	I1011 21:18:38.960327   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:38.960729   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:38.960754   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:38.960678   30793 retry.go:31] will retry after 5.543915191s: waiting for machine to come up
	I1011 21:18:44.509752   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510179   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has current primary IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510202   29617 main.go:141] libmachine: (ha-610874-m03) Found IP for machine: 192.168.39.222
	I1011 21:18:44.510223   29617 main.go:141] libmachine: (ha-610874-m03) Reserving static IP address...
	I1011 21:18:44.510657   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find host DHCP lease matching {name: "ha-610874-m03", mac: "52:54:00:54:11:ff", ip: "192.168.39.222"} in network mk-ha-610874
	I1011 21:18:44.581123   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Getting to WaitForSSH function...
	I1011 21:18:44.581152   29617 main.go:141] libmachine: (ha-610874-m03) Reserved static IP address: 192.168.39.222
	I1011 21:18:44.581189   29617 main.go:141] libmachine: (ha-610874-m03) Waiting for SSH to be available...
	I1011 21:18:44.584495   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585006   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.585034   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585216   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH client type: external
	I1011 21:18:44.585245   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa (-rw-------)
	I1011 21:18:44.585269   29617 main.go:141] libmachine: (ha-610874-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:18:44.585288   29617 main.go:141] libmachine: (ha-610874-m03) DBG | About to run SSH command:
	I1011 21:18:44.585303   29617 main.go:141] libmachine: (ha-610874-m03) DBG | exit 0
	I1011 21:18:44.714704   29617 main.go:141] libmachine: (ha-610874-m03) DBG | SSH cmd err, output: <nil>: 
	I1011 21:18:44.714970   29617 main.go:141] libmachine: (ha-610874-m03) KVM machine creation complete!
	I1011 21:18:44.715289   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:44.715822   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.715996   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.716157   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:18:44.716172   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetState
	I1011 21:18:44.717356   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:18:44.717371   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:18:44.717376   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:18:44.717382   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.719703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.719994   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.720030   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.720182   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.720357   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720507   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.720910   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.721104   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.721116   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:18:44.833939   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:44.833957   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:18:44.833964   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.836658   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837043   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.837069   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837281   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.837454   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837581   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837720   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.837855   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.838048   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.838063   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:18:44.951348   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:18:44.951417   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:18:44.951426   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:18:44.951433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951662   29617 buildroot.go:166] provisioning hostname "ha-610874-m03"
	I1011 21:18:44.951688   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951865   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.954732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955115   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.955139   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955310   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.955477   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955769   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.955914   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.956070   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.956081   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m03 && echo "ha-610874-m03" | sudo tee /etc/hostname
	I1011 21:18:45.085832   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m03
	
	I1011 21:18:45.085866   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.088705   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089140   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.089165   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089355   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.089596   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089767   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089921   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.090058   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.090210   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.090224   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:18:45.213456   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:45.213485   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:18:45.213503   29617 buildroot.go:174] setting up certificates
	I1011 21:18:45.213511   29617 provision.go:84] configureAuth start
	I1011 21:18:45.213520   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:45.213850   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.216516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.216909   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.216945   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.217058   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.219374   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219692   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.219725   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219870   29617 provision.go:143] copyHostCerts
	I1011 21:18:45.219895   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.219927   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:18:45.219936   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.220002   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:18:45.220073   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220091   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:18:45.220098   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220120   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:18:45.220162   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220179   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:18:45.220186   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220212   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:18:45.220261   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m03 san=[127.0.0.1 192.168.39.222 ha-610874-m03 localhost minikube]
	I1011 21:18:45.381567   29617 provision.go:177] copyRemoteCerts
	I1011 21:18:45.381648   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:18:45.381676   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.384744   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385058   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.385090   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385241   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.385433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.385594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.385733   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.474156   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:18:45.474223   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:18:45.499839   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:18:45.499913   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:18:45.523935   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:18:45.524000   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:18:45.546732   29617 provision.go:87] duration metric: took 333.208457ms to configureAuth
	I1011 21:18:45.546761   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:18:45.546986   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:45.547077   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.549423   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549746   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.549774   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549963   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.550145   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550309   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550436   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.550559   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.550750   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.550765   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:18:45.793129   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:18:45.793158   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:18:45.793166   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetURL
	I1011 21:18:45.794426   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using libvirt version 6000000
	I1011 21:18:45.796703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797072   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.797104   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797300   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:18:45.797313   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:18:45.797320   29617 client.go:171] duration metric: took 26.137622442s to LocalClient.Create
	I1011 21:18:45.797348   29617 start.go:167] duration metric: took 26.137680612s to libmachine.API.Create "ha-610874"
	I1011 21:18:45.797358   29617 start.go:293] postStartSetup for "ha-610874-m03" (driver="kvm2")
	I1011 21:18:45.797373   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:18:45.797391   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:45.797597   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:18:45.797632   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.799512   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799830   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.799859   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799989   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.800143   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.800296   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.800459   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.889596   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:18:45.893814   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:18:45.893840   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:18:45.893920   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:18:45.893992   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:18:45.894000   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:18:45.894078   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:18:45.903909   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:45.928066   29617 start.go:296] duration metric: took 130.695494ms for postStartSetup
	I1011 21:18:45.928125   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:45.928694   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.931370   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.931736   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.931757   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.932008   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:45.932227   29617 start.go:128] duration metric: took 26.290217466s to createHost
	I1011 21:18:45.932255   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.934599   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.934957   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.934980   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.935141   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.935302   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935450   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.935755   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.935906   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.935915   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:18:46.051363   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681526.030608830
	
	I1011 21:18:46.051382   29617 fix.go:216] guest clock: 1728681526.030608830
	I1011 21:18:46.051389   29617 fix.go:229] Guest: 2024-10-11 21:18:46.03060883 +0000 UTC Remote: 2024-10-11 21:18:45.932240932 +0000 UTC m=+149.654084325 (delta=98.367898ms)
	I1011 21:18:46.051403   29617 fix.go:200] guest clock delta is within tolerance: 98.367898ms
	I1011 21:18:46.051408   29617 start.go:83] releasing machines lock for "ha-610874-m03", held for 26.409503393s
	I1011 21:18:46.051425   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.051638   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:46.054103   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.054465   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.054484   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.056759   29617 out.go:177] * Found network options:
	I1011 21:18:46.058108   29617 out.go:177]   - NO_PROXY=192.168.39.10,192.168.39.11
	W1011 21:18:46.059377   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.059397   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.059412   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.059861   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060012   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060103   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:18:46.060140   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	W1011 21:18:46.060197   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.060218   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.060273   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:18:46.060291   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:46.062781   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063134   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063156   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063177   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063332   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063533   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.063672   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.063695   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063722   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063809   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063917   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.063937   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.064070   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.064193   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.315238   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:18:46.321537   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:18:46.321622   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:18:46.338777   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:18:46.338801   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:18:46.338861   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:18:46.354279   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:18:46.367905   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:18:46.367951   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:18:46.382395   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:18:46.395784   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:18:46.527698   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:18:46.689393   29617 docker.go:233] disabling docker service ...
	I1011 21:18:46.689462   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:18:46.704203   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:18:46.717422   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:18:46.835539   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:18:46.954100   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:18:46.969007   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:18:46.988391   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:18:46.988466   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:46.998736   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:18:46.998798   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.011000   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.020896   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.032139   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:18:47.042674   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.053148   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.070001   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
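	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (not a captured file), the relevant settings afterwards should look roughly like this and can be confirmed with a grep on the node:
	    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	      "net.ipv4.ip_unprivileged_port_start=0",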
	I1011 21:18:47.079898   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:18:47.089404   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:18:47.089464   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:18:47.101955   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
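	Note: the sysctl failure logged above is expected on a fresh node because the br_netfilter module is not loaded yet, which is why the modprobe and the ip_forward write follow. A hedged verification sketch (commands assumed, not captured):
	    $ sudo modprobe br_netfilter
	    $ sysctl net.bridge.bridge-nf-call-iptables    # should succeed once the module is loaded (typically reports 1)
	    $ cat /proc/sys/net/ipv4/ip_forward            # expected to read 1 after the echo above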
	I1011 21:18:47.111372   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:47.225475   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:18:47.314226   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:18:47.314298   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:18:47.318974   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:18:47.319034   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:18:47.322683   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:18:47.363256   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:18:47.363346   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.390105   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.420312   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:18:47.421976   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:18:47.423450   29617 out.go:177]   - env NO_PROXY=192.168.39.10,192.168.39.11
	I1011 21:18:47.424609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:47.427015   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427408   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:47.427435   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427580   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:18:47.432290   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:47.445118   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:18:47.445341   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:47.445588   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.445623   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.460772   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1011 21:18:47.461253   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.461758   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.461778   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.462071   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.462258   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:18:47.463800   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:47.464063   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.464094   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.478835   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1011 21:18:47.479190   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.479632   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.479653   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.479922   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.480090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:47.480267   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.222
	I1011 21:18:47.480276   29617 certs.go:194] generating shared ca certs ...
	I1011 21:18:47.480289   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.480440   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:18:47.480492   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:18:47.480504   29617 certs.go:256] generating profile certs ...
	I1011 21:18:47.480599   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:18:47.480632   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda
	I1011 21:18:47.480651   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:18:47.766344   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda ...
	I1011 21:18:47.766372   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda: {Name:mk781938e611c805d4d3614e2a3753b43a334879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766558   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda ...
	I1011 21:18:47.766576   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda: {Name:mk730a6176bc0314778375ee5435bf733e13e8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766701   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:18:47.766854   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:18:47.767020   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
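	Note: the apiserver certificate generated above is signed for the cluster service IP, localhost, all three control-plane node IPs and the VIP. A hedged way to confirm the SANs on the machine that produced it (path as logged above):
	    $ openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt \
	      | grep -A2 'Subject Alternative Name'
	    # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.10, 192.168.39.11, 192.168.39.222 and 192.168.39.254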
	I1011 21:18:47.767039   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:18:47.767069   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:18:47.767088   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:18:47.767105   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:18:47.767122   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:18:47.767138   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:18:47.767155   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:18:47.790727   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:18:47.790840   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:18:47.790890   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:18:47.790900   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:18:47.790934   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:18:47.790968   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:18:47.791002   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:18:47.791046   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:47.791074   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:18:47.791090   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:47.791103   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:18:47.791139   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:47.794048   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794490   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:47.794521   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794666   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:47.794865   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:47.795021   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:47.795166   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:47.874924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:18:47.879896   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:18:47.890508   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:18:47.894884   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:18:47.906444   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:18:47.911071   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:18:47.924640   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:18:47.929130   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:18:47.939543   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:18:47.943420   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:18:47.952418   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:18:47.956156   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:18:47.965542   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:18:47.990672   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:18:48.018655   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:18:48.046638   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:18:48.075087   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1011 21:18:48.099261   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1011 21:18:48.125316   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:18:48.150810   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:18:48.176240   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:18:48.202437   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:18:48.228304   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:18:48.250733   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:18:48.267330   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:18:48.284282   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:18:48.300414   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:18:48.317312   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:18:48.334266   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:18:48.350540   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:18:48.366454   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:18:48.371903   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:18:48.382259   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386521   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386558   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.392096   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:18:48.402476   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:18:48.414951   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420157   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420212   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.426147   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:18:48.437228   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:18:48.447706   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452447   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452490   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.457944   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
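	Note: the hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL c_rehash convention: the link name is the subject hash of the certificate it points to. For example (a sketch, output inferred from the link created above, not captured):
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ ls -l /etc/ssl/certs/b5213941.0    # should resolve to minikubeCA.pem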
	I1011 21:18:48.469558   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:18:48.473684   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:18:48.473727   29617 kubeadm.go:934] updating node {m03 192.168.39.222 8443 v1.31.1 crio true true} ...
	I1011 21:18:48.473800   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:18:48.473821   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:18:48.473848   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:18:48.489435   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:18:48.489512   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1011 21:18:48.489571   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.499111   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:18:48.499166   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:18:48.509200   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509211   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1011 21:18:48.509233   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509250   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509288   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509215   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:48.517849   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:18:48.517877   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:18:48.530466   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.530534   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:18:48.530551   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:18:48.530575   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.584347   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:18:48.584388   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
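	Note: because the v1.31.1 binaries are missing on the new node, they are copied from the local cache rather than re-downloaded; the checksum URLs logged above describe how that cache is validated. A rough manual equivalent of the checksum-verified download (dl.k8s.io URLs taken from the log, commands assumed, not captured):
	    $ curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
	    $ curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	    $ echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check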
	I1011 21:18:49.359545   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:18:49.369067   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 21:18:49.386375   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:18:49.402697   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:18:49.419546   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:18:49.424269   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:49.437035   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:49.561710   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:18:49.579907   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:49.580262   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:49.580306   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:49.596329   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1011 21:18:49.596782   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:49.597244   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:49.597267   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:49.597574   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:49.597761   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:49.597902   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:18:49.598045   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:18:49.598061   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:49.601098   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601584   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:49.601613   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601735   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:49.601902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:49.602044   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:49.602182   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:49.765636   29617 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:49.765692   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I1011 21:19:12.027662   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (22.261919257s)
	I1011 21:19:12.027723   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:19:12.601287   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m03 minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:19:12.730357   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:19:12.852046   29617 start.go:319] duration metric: took 23.254138834s to joinCluster
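	Note: after the kubeadm join completes, the node is labeled with minikube metadata and the control-plane NoSchedule taint is removed so the third control-plane node can also schedule workloads. A hedged spot-check with kubectl (context name follows the profile, as elsewhere in this report):
	    $ kubectl --context ha-610874 get node ha-610874-m03 --show-labels
	    $ kubectl --context ha-610874 describe node ha-610874-m03 | grep -i taints
	    # after the taint removal above, node-role.kubernetes.io/control-plane:NoSchedule should no longer be listed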
	I1011 21:19:12.852173   29617 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:19:12.852553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:19:12.853928   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:19:12.855524   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:19:13.141318   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:19:13.175499   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:19:13.175739   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:19:13.175813   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:19:13.176040   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:13.176203   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.176216   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.176230   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.176236   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.180062   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:13.676530   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.676550   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.676559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.676563   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.680629   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.176763   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.176790   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.176802   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.176813   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.181595   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.676942   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.676962   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.676971   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.676974   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.680092   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.177198   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.177232   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.177243   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.177251   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.181013   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.181507   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:15.676949   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.676975   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.676985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.676991   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.680404   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.176381   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.176401   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.176411   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.176416   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.179611   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.676230   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.676253   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.676264   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.676269   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.679007   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.176965   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.176991   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.177003   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.177010   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.179578   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.677212   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.677239   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.677250   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.677257   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.680848   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:17.681529   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:18.176617   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.176642   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.176652   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.176657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.180501   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:18.676324   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.676344   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.676352   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.676356   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.680172   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:19.176785   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.176805   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.176813   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.176817   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.180917   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:19.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.676229   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.676239   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.676247   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.679537   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:20.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.176578   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.176586   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.176590   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.180852   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:20.181655   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:20.676981   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.677001   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.677010   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.677013   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.680773   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.176665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.176687   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.176695   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.176698   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.180326   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.677105   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.677131   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.677143   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.677150   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.680523   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:22.176275   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.176296   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.176305   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.176311   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.180665   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:22.181892   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:22.677209   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.677234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.677254   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.677260   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.680867   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.177040   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.177059   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.177067   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.177072   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.180354   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.676494   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.676523   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.676533   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.676539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.679890   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.177143   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.177165   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.177172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.177178   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.181118   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.182010   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:24.677149   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.677167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.677176   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.677179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.681310   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.176839   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.176861   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.176869   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.176875   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.181361   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.676226   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.676235   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.676238   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.679734   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.176896   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.176927   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.176938   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.176942   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.180665   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.676529   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.676556   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.676567   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.676574   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.679852   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.680538   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:27.176980   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.177000   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.177008   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.177011   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.180641   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:27.676837   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.676865   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.676876   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.676883   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.680097   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.177112   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.177134   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.177145   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.177152   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.180461   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.676318   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.676339   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.676347   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.676351   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.680275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.680843   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:29.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.176576   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.176584   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.176589   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.180006   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:29.676572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.676591   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.676601   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.676608   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.679885   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.176623   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.176647   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.176655   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.176660   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.180360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.676414   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.676442   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.676454   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.676462   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.679795   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.176596   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.176622   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.176632   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.176638   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.180174   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.180775   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:31.676625   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.676645   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.676653   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.676657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.679755   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.176832   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.176853   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.176861   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.176866   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.180709   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.676943   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.676966   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.676975   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.676979   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.680453   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.176289   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.176309   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.176317   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.176323   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179239   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:33.179746   29617 node_ready.go:49] node "ha-610874-m03" has status "Ready":"True"
	I1011 21:19:33.179763   29617 node_ready.go:38] duration metric: took 20.003708199s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:33.179771   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:33.179838   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:33.179846   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.179852   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179856   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.189958   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.199406   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.199502   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:19:33.199514   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.199523   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.199531   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.209887   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.210687   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.210702   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.210713   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.210717   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.217280   29617 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1011 21:19:33.217765   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.217784   29617 pod_ready.go:82] duration metric: took 18.353705ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217795   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217867   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:19:33.217877   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.217887   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.217892   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.223080   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:33.223812   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.223824   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.223831   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.223835   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.230872   29617 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1011 21:19:33.231311   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.231329   29617 pod_ready.go:82] duration metric: took 13.526998ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231340   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231407   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:19:33.231416   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.231425   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.231433   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.241511   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.242134   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.242152   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.242161   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.242167   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.246996   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.247556   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.247579   29617 pod_ready.go:82] duration metric: took 16.22432ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247588   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247649   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:19:33.247658   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.247665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.247671   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.251040   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.251793   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:33.251812   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.251824   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.251833   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.256535   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.256972   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.256988   29617 pod_ready.go:82] duration metric: took 9.394627ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.256997   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.377135   29617 request.go:632] Waited for 120.080186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377222   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.377244   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.377255   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.380444   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.576460   29617 request.go:632] Waited for 195.298391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576523   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576531   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.576540   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.576546   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.579942   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.580389   29617 pod_ready.go:93] pod "etcd-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.580410   29617 pod_ready.go:82] duration metric: took 323.407782ms for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.580426   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.776719   29617 request.go:632] Waited for 196.227093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776796   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776801   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.776812   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.776819   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.780183   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.977331   29617 request.go:632] Waited for 196.373167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977390   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.977408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.977414   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.980667   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.981324   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.981341   29617 pod_ready.go:82] duration metric: took 400.908426ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.981356   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.176801   29617 request.go:632] Waited for 195.389419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176872   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176878   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.176886   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.176893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.180626   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.376945   29617 request.go:632] Waited for 195.362412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377024   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377032   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.377039   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.377045   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.380705   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.381593   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.381610   29617 pod_ready.go:82] duration metric: took 400.248016ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.381621   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.576685   29617 request.go:632] Waited for 195.00587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576774   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576785   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.576796   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.576812   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.580220   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.776845   29617 request.go:632] Waited for 195.742935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776934   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776946   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.776957   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.776965   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.781975   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:34.782910   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.782934   29617 pod_ready.go:82] duration metric: took 401.305343ms for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.782947   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.976878   29617 request.go:632] Waited for 193.849735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976930   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976935   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.976942   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.976951   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.980959   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.176307   29617 request.go:632] Waited for 194.592291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176377   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176382   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.176391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.176396   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.180046   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.180744   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.180763   29617 pod_ready.go:82] duration metric: took 397.808243ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.180772   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.376823   29617 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376892   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376904   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.376914   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.376920   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.380896   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.577025   29617 request.go:632] Waited for 195.339459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577098   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577106   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.577113   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.577121   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.580479   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.581020   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.581044   29617 pod_ready.go:82] duration metric: took 400.264515ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.581060   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.777008   29617 request.go:632] Waited for 195.878722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777069   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777082   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.777104   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.777112   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.780597   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.976851   29617 request.go:632] Waited for 195.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976920   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976925   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.976934   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.976956   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.980563   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.981007   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.981026   29617 pod_ready.go:82] duration metric: took 399.955573ms for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.981036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.177077   29617 request.go:632] Waited for 195.967969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177157   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177162   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.177169   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.177174   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.181463   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:36.376692   29617 request.go:632] Waited for 194.268817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376745   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376750   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.376757   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.376762   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.379384   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.379856   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.379878   29617 pod_ready.go:82] duration metric: took 398.835564ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.379892   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.577313   29617 request.go:632] Waited for 197.342873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577431   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577448   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.577456   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.577460   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.580412   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.776616   29617 request.go:632] Waited for 195.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776706   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776717   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.776728   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.776737   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.779960   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:36.780383   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.780400   29617 pod_ready.go:82] duration metric: took 400.499984ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.780412   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.976358   29617 request.go:632] Waited for 195.870601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976432   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976449   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.976465   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.976472   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.979995   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.177111   29617 request.go:632] Waited for 196.357808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177162   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.177174   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.177179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.180267   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.181011   29617 pod_ready.go:93] pod "kube-proxy-cwzw4" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.181027   29617 pod_ready.go:82] duration metric: took 400.605186ms for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.181036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.377210   29617 request.go:632] Waited for 196.081343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377264   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377271   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.377281   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.377290   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.380963   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.577326   29617 request.go:632] Waited for 195.76133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577389   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.577404   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.577408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.580712   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.581178   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.581195   29617 pod_ready.go:82] duration metric: took 400.154079ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.581207   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.776648   29617 request.go:632] Waited for 195.355762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776752   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776766   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.776778   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.776782   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.779689   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:37.976673   29617 request.go:632] Waited for 196.375961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976747   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976758   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.976880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.976898   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.980426   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.981073   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.981096   29617 pod_ready.go:82] duration metric: took 399.882141ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.981108   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.177223   29617 request.go:632] Waited for 196.014293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177283   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177288   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.177296   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.177301   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.181281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.376306   29617 request.go:632] Waited for 194.28038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376394   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376403   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.376412   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.376419   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.379547   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.380029   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:38.380048   29617 pod_ready.go:82] duration metric: took 398.929633ms for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.380058   29617 pod_ready.go:39] duration metric: took 5.200277623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:38.380084   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:19:38.380134   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:19:38.400400   29617 api_server.go:72] duration metric: took 25.548169639s to wait for apiserver process to appear ...
	I1011 21:19:38.400421   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:19:38.400455   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:19:38.404896   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:19:38.404960   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:19:38.404973   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.404983   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.404989   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.405751   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:19:38.405814   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:19:38.405829   29617 api_server.go:131] duration metric: took 5.403218ms to wait for apiserver health ...
	I1011 21:19:38.405839   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:19:38.577234   29617 request.go:632] Waited for 171.320057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577302   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577307   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.577315   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.577319   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.583229   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.592399   29617 system_pods.go:59] 24 kube-system pods found
	I1011 21:19:38.592431   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.592436   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.592439   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.592442   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.592445   29617 system_pods.go:61] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.592448   29617 system_pods.go:61] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.592452   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.592455   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.592458   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.592461   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.592465   29617 system_pods.go:61] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.592468   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.592474   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.592477   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.592480   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.592484   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.592488   29617 system_pods.go:61] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.592493   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.592496   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.592499   29617 system_pods.go:61] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.592502   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.592511   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.592517   29617 system_pods.go:61] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.592521   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.592525   29617 system_pods.go:74] duration metric: took 186.682269ms to wait for pod list to return data ...
	I1011 21:19:38.592532   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:19:38.776788   29617 request.go:632] Waited for 184.17903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776850   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776857   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.776867   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.776874   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.780634   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.780764   29617 default_sa.go:45] found service account: "default"
	I1011 21:19:38.780782   29617 default_sa.go:55] duration metric: took 188.241369ms for default service account to be created ...
	I1011 21:19:38.780791   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:19:38.977229   29617 request.go:632] Waited for 196.374035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977314   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977326   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.977333   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.977339   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.983305   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.990701   29617 system_pods.go:86] 24 kube-system pods found
	I1011 21:19:38.990734   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.990743   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.990750   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.990756   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.990762   29617 system_pods.go:89] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.990769   29617 system_pods.go:89] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.990775   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.990782   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.990790   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.990800   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.990808   29617 system_pods.go:89] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.990818   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.990826   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.990835   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.990842   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.990849   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.990856   29617 system_pods.go:89] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.990866   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.990873   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.990880   29617 system_pods.go:89] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.990889   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.990896   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.990903   29617 system_pods.go:89] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.990910   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.990922   29617 system_pods.go:126] duration metric: took 210.12433ms to wait for k8s-apps to be running ...
	I1011 21:19:38.990936   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:19:38.991000   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:19:39.006368   29617 system_svc.go:56] duration metric: took 15.405995ms WaitForService to wait for kubelet
	I1011 21:19:39.006398   29617 kubeadm.go:582] duration metric: took 26.154169399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:19:39.006432   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:19:39.177139   29617 request.go:632] Waited for 170.58768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177204   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177210   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:39.177218   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:39.177226   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:39.180762   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:39.182158   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182186   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182210   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182214   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182219   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182222   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182225   29617 node_conditions.go:105] duration metric: took 175.788668ms to run NodePressure ...
	I1011 21:19:39.182235   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:19:39.182261   29617 start.go:255] writing updated cluster config ...
	I1011 21:19:39.182594   29617 ssh_runner.go:195] Run: rm -f paused
	I1011 21:19:39.238354   29617 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:19:39.241534   29617 out.go:177] * Done! kubectl is now configured to use "ha-610874" cluster and "default" namespace by default
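
For orientation, the start log above follows a common readiness-check pattern against the Kubernetes API: poll the Node object until its Ready condition is True (the repeated GET /api/v1/nodes/ha-610874-m03 calls), then verify that the system-critical pods in kube-system report Ready, and finally probe the apiserver's /healthz endpoint. The snippet below is only an illustrative client-go sketch of that pattern, not minikube's actual implementation (minikube's own helpers in node_ready.go, pod_ready.go and api_server.go produce the log lines shown); the kubeconfig path, node name and 500ms poll interval are assumptions made for the example.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the Node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path and node name; adjust for your cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // 1. Poll the node object (roughly every 500ms, as in the log) until Ready.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(ctx, "ha-610874-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }

        // 2. Check which system-critical pods in kube-system report Ready.
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            fmt.Printf("pod %s ready=%v\n", p.Name, ready)
        }

        // 3. Probe the apiserver's /healthz endpoint, mirroring the healthz check above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", string(body))
    }

The equivalent one-off checks can also be done from the command line (as the test harness does elsewhere in this report), e.g. "kubectl wait --for=condition=Ready node/ha-610874-m03 --timeout=2m" followed by "kubectl get pods -n kube-system".
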
	
	
	==> CRI-O <==
	Oct 11 21:23:31 ha-610874 crio[662]: time="2024-10-11 21:23:31.994733273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681811994711789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35ea923f-f10a-48c2-8c3f-92734513c074 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:31 ha-610874 crio[662]: time="2024-10-11 21:23:31.995308036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32bf961c-3eab-476d-9077-9143e4afa8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:31 ha-610874 crio[662]: time="2024-10-11 21:23:31.995377681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32bf961c-3eab-476d-9077-9143e4afa8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:31 ha-610874 crio[662]: time="2024-10-11 21:23:31.995587668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32bf961c-3eab-476d-9077-9143e4afa8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.031881716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e5af777-ddf6-4ff3-9bf0-da833fad525b name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.031969580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e5af777-ddf6-4ff3-9bf0-da833fad525b name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.033092733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acf80eb2-afa7-430a-97a5-0950f58f39fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.034101082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681812033651437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acf80eb2-afa7-430a-97a5-0950f58f39fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.035009621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40008fd7-e7c1-4e02-ae0b-057bc6429d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.035105690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40008fd7-e7c1-4e02-ae0b-057bc6429d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.035876487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40008fd7-e7c1-4e02-ae0b-057bc6429d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.075417730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67251cd0-978c-4cee-a8da-edd5ac5dbb4b name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.075511700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67251cd0-978c-4cee-a8da-edd5ac5dbb4b name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.076682192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e3aad37-0e6d-4b76-85e2-5915c6bc465a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.077135506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681812077083492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e3aad37-0e6d-4b76-85e2-5915c6bc465a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.077971012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=404e9237-ee2c-4ad8-951d-47d22fff651a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.078046772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=404e9237-ee2c-4ad8-951d-47d22fff651a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.078399668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=404e9237-ee2c-4ad8-951d-47d22fff651a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.118760841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7efbd325-e907-43a1-8ee6-56d603364d73 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.118832056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7efbd325-e907-43a1-8ee6-56d603364d73 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.121075046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f5c599a-cb57-4f15-be8b-d1bd23d97d2c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.125965115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681812125873547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f5c599a-cb57-4f15-be8b-d1bd23d97d2c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.128473699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bdb9067-b32f-43db-b06a-ef21d56cea76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.128531689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bdb9067-b32f-43db-b06a-ef21d56cea76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:32 ha-610874 crio[662]: time="2024-10-11 21:23:32.128759953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bdb9067-b32f-43db-b06a-ef21d56cea76 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a12e9c8cc5fc5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3d6c8146ac279       busybox-7dff88458-wdkxg
	add7da026dcc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8079f4949344c       coredns-7c65d6cfc9-xdhdb
	f6f7910716598       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bb1b1e2f66116       coredns-7c65d6cfc9-bhkxl
	01564ba5bc1e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5b0253d201393       storage-provisioner
	9d5b2015aad60       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   bc055170688e1       kindnet-pd7rn
	4af1bc183cfbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   9bb0d73fd8a6d       kube-proxy-4tqhn
	7009deb3ff5ef       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   343b700a511ad       kube-vip-ha-610874
	1bb0907534c8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9a96e5f0cd28a       kube-controller-manager-ha-610874
	093fe14b91d96       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   089d2c0589273       kube-scheduler-ha-610874
	b6a994e3f4bd9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6fbc98773bd42       kube-apiserver-ha-610874
	1cf13112be94f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   65e184a932364       etcd-ha-610874
	
	
	==> coredns [add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6] <==
	[INFO] 10.244.1.2:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143766s
	[INFO] 10.244.1.2:38119 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142587s
	[INFO] 10.244.1.2:40246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.002793445s
	[INFO] 10.244.1.2:46273 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000207574s
	[INFO] 10.244.0.4:51515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133463s
	[INFO] 10.244.0.4:34555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773084s
	[INFO] 10.244.0.4:56190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010851s
	[INFO] 10.244.0.4:35324 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114943s
	[INFO] 10.244.0.4:37261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075619s
	[INFO] 10.244.2.2:33936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100657s
	[INFO] 10.244.2.2:47182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246779s
	[INFO] 10.244.1.2:44485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167961s
	[INFO] 10.244.1.2:46483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141019s
	[INFO] 10.244.1.2:55464 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121351s
	[INFO] 10.244.0.4:47194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117616s
	[INFO] 10.244.0.4:49523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148468s
	[INFO] 10.244.0.4:45932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127987s
	[INFO] 10.244.0.4:49317 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075167s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169352s
	[INFO] 10.244.2.2:33809 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014751s
	[INFO] 10.244.2.2:44485 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176967s
	[INFO] 10.244.1.2:48359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011299s
	[INFO] 10.244.0.4:56947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140437s
	[INFO] 10.244.0.4:57754 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075899s
	[INFO] 10.244.0.4:59528 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091718s
	
	
	==> coredns [f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb] <==
	[INFO] 127.0.0.1:48153 - 48750 "HINFO IN 7219889624523006915.8528053042981959638. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015325438s
	[INFO] 10.244.2.2:47536 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.017241259s
	[INFO] 10.244.2.2:38591 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013641236s
	[INFO] 10.244.1.2:49949 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322762s
	[INFO] 10.244.1.2:43849 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00009337s
	[INFO] 10.244.0.4:40246 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000070768s
	[INFO] 10.244.0.4:45808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00140468s
	[INFO] 10.244.2.2:36598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219913s
	[INFO] 10.244.2.2:59970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164371s
	[INFO] 10.244.2.2:54785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130909s
	[INFO] 10.244.1.2:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791262s
	[INFO] 10.244.1.2:49139 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158826s
	[INFO] 10.244.1.2:59870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00130207s
	[INFO] 10.244.1.2:48112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127361s
	[INFO] 10.244.0.4:37981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152222s
	[INFO] 10.244.0.4:40975 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001145115s
	[INFO] 10.244.0.4:46746 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060695s
	[INFO] 10.244.2.2:60221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111051s
	[INFO] 10.244.2.2:45949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000966s
	[INFO] 10.244.1.2:51845 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131185s
	[INFO] 10.244.2.2:49925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140614s
	[INFO] 10.244.1.2:40749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139491s
	[INFO] 10.244.1.2:40058 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192557s
	[INFO] 10.244.1.2:36253 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154213s
	[INFO] 10.244.0.4:54354 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127201s
	
	
	==> describe nodes <==
	Name:               ha-610874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:16:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    ha-610874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cfe54b8903d4e3899113202463cdd3d
	  System UUID:                0cfe54b8-903d-4e38-9911-3202463cdd3d
	  Boot ID:                    afa53331-2d72-4daf-aead-d3b59f60fb23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdkxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-bhkxl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-xdhdb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-610874                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-pd7rn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-610874             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-610874    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-4tqhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-610874             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-610874                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m26s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s  kubelet          Node ha-610874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s  kubelet          Node ha-610874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s  kubelet          Node ha-610874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-610874 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  RegisteredNode           4m14s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	
	
	Name:               ha-610874-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:17:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:20:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-610874-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5e48fde498443da85ce03c51747b961
	  System UUID:                e5e48fde-4984-43da-85ce-03c51747b961
	  Boot ID:                    bf2f6504-4406-4797-b6e1-dc754be8ce6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pwg8s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-610874-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m38s
	  kube-system                 kindnet-xs5m6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-610874-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-ha-610874-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-4bj7p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-610874-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-610874-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node ha-610874-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeNotReady             114s                   node-controller  Node ha-610874-m02 status is now: NodeNotReady
	
	
	Name:               ha-610874-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-610874-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1063a3d54d5d40c88a61db94380d3423
	  System UUID:                1063a3d5-4d5d-40c8-8a61-db94380d3423
	  Boot ID:                    ced9dc07-ccd1-4190-aae0-50f9a8bdae06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4sstr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-610874-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-2c774                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-610874-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-ha-610874-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-cwzw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-610874-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-vip-ha-610874-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m23s                  cidrAllocator    Node ha-610874-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-610874-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	
	
	Name:               ha-610874-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_20_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-610874-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75d61525a70843b49a5efd4786a05869
	  System UUID:                75d61525-a708-43b4-9a5e-fd4786a05869
	  Boot ID:                    172ace10-e670-4373-a755-bb93871c28da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7dn76       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-vrd24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-610874-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m13s                  cidrAllocator    Node ha-610874-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-610874-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.855992] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543327] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581790] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.580104] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056339] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.193419] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.137869] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293941] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.956728] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.562630] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.064485] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.508464] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.090437] kauditd_printk_skb: 79 callbacks suppressed
	[Oct11 21:17] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.436722] kauditd_printk_skb: 29 callbacks suppressed
	[ +46.213407] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a] <==
	{"level":"warn","ts":"2024-10-11T21:23:32.381310Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.383005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.387134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.398968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.406302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.412605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.415762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.419764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.427273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.428859Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"75f7d6a6d827e320","rtt":"1.615137ms","error":"dial tcp 192.168.39.11:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-11T21:23:32.429335Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"75f7d6a6d827e320","rtt":"9.863592ms","error":"dial tcp 192.168.39.11:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-11T21:23:32.433666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.440182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.445319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.448772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.456368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.463037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.469269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.473048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.476377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.480339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.480999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.489678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.495980Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:32.526252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:23:32 up 7 min,  0 users,  load average: 0.33, 0.38, 0.20
	Linux ha-610874 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952] <==
	I1011 21:22:53.015599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.016986       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:03.017143       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:03.017517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:03.017599       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:03.017887       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:03.017926       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:03.018170       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:03.018292       1 main.go:300] handling current node
	I1011 21:23:13.008357       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:13.008403       1 main.go:300] handling current node
	I1011 21:23:13.008468       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:13.008474       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:13.008844       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:13.008922       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:13.009419       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:13.009448       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:23.017976       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:23.018143       1 main.go:300] handling current node
	I1011 21:23:23.018234       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:23.018259       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:23.018517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:23.018551       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:23.018673       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:23.018695       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948] <==
	I1011 21:17:03.544827       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1011 21:17:03.633951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1011 21:17:53.070315       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.070829       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 84.644µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:17:53.072106       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.073324       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.074623       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.578549ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:10.074019       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.449µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:19:10.074013       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9bd8f8e8-8e91-4067-a12f-1ea2d8bd41c6"
	E1011 21:19:10.074068       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.809µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:45.881753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47690: use of closed network connection
	E1011 21:19:46.062184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47710: use of closed network connection
	E1011 21:19:46.253652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E1011 21:19:46.438494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47750: use of closed network connection
	E1011 21:19:46.637537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47770: use of closed network connection
	E1011 21:19:46.815140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45184: use of closed network connection
	E1011 21:19:47.002661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45216: use of closed network connection
	E1011 21:19:47.179398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45236: use of closed network connection
	E1011 21:19:47.346528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45250: use of closed network connection
	E1011 21:19:47.638405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45264: use of closed network connection
	E1011 21:19:47.808669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45288: use of closed network connection
	E1011 21:19:47.977304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45304: use of closed network connection
	E1011 21:19:48.152762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45326: use of closed network connection
	E1011 21:19:48.324710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45346: use of closed network connection
	E1011 21:19:48.491718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45354: use of closed network connection
	
	
	==> kube-controller-manager [1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865] <==
	I1011 21:20:18.968008       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-610874-m04" podCIDRs=["10.244.3.0/24"]
	I1011 21:20:18.968119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.968257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.984966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:19.260924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.121280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.397093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.070457       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-610874-m04"
	I1011 21:20:23.072402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.132945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.420908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.568334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:29.120840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562762       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:20:39.580852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:40.377354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:49.215156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:21:38.097956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:21:38.098503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.132013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.234358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.800775ms"
	I1011 21:21:38.234458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.4µs"
	I1011 21:21:38.464262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:43.340055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	
	
	==> kube-proxy [4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:17:05.854510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:17:05.879022       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E1011 21:17:05.879501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:17:05.914134       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:17:05.914253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:17:05.914286       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:17:05.916891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:17:05.917757       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:17:05.917796       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:17:05.920479       1 config.go:199] "Starting service config controller"
	I1011 21:17:05.920740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:17:05.920939       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:17:05.920964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:17:05.921847       1 config.go:328] "Starting node config controller"
	I1011 21:17:05.921877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:17:06.021605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:17:06.021672       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:17:06.021955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94] <==
	W1011 21:16:56.914961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:56.914997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:56.955611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 21:16:56.955698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.100673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:16:57.100737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.117148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.117326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.263820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 21:16:57.264353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.296892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 21:16:57.297090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.359800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.360057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.555273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 21:16:57.555402       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 21:17:00.497419       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1011 21:20:19.054608       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7dn76" node="ha-610874-m04"
	E1011 21:20:19.055446       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-7dn76"
	E1011 21:20:19.188470       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dz2h8" node="ha-610874-m04"
	E1011 21:20:19.188552       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-dz2h8"
	E1011 21:20:19.193309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	E1011 21:20:19.195518       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f3a80da1-771c-458b-85ce-bff2b7759d1e(kube-system/kube-proxy-ht4ns) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ht4ns"
	E1011 21:20:19.195828       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" pod="kube-system/kube-proxy-ht4ns"
	I1011 21:20:19.196036       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	
	
	==> kubelet <==
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036447    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:21:59 ha-610874 kubelet[1312]: E1011 21:21:59.036488    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681719036062418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038549    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038630    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040811    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040841    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.042974    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.043019    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044063    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044089    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045695    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045734    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:58 ha-610874 kubelet[1312]: E1011 21:22:58.943175    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.046933    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.047037    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049554    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049631    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.053671    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.054088    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:29 ha-610874 kubelet[1312]: E1011 21:23:29.057472    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681809056986667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:29 ha-610874 kubelet[1312]: E1011 21:23:29.057867    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681809056986667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
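(Editorial note on the kubelet log above: it ends with two recurring errors, the eviction manager failing to obtain image-filesystem stats from CRI-O ("missing image stats"), and the iptables canary failing because the IPv6 `nat` table cannot be initialized. A minimal sketch of how one might spot-check both conditions from the host, assuming the standard minikube guest tooling (crictl, modprobe, ip6tables) is present in this ISO; the commands below are illustrative, not part of the test run:

	# Does CRI-O report image filesystem stats? (relates to "missing image stats")
	out/minikube-linux-amd64 -p ha-610874 ssh -- sudo crictl imagefsinfo

	# Is the IPv6 nat table available? (relates to the KUBE-KUBELET-CANARY error)
	out/minikube-linux-amd64 -p ha-610874 ssh -- sudo modprobe ip6table_nat
	out/minikube-linux-amd64 -p ha-610874 ssh -- sudo ip6tables -t nat -L -n

If `modprobe ip6table_nat` fails, the guest kernel likely lacks the module, which would explain the repeated canary errors.)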
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.222841013s)
ha_test.go:309: expected profile "ha-610874" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-610874\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-610874\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-610874\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.10\",\"Port\":8443,\"Kubernet
esVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.11\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.222\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.87\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":
false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"M
ountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (1.3810767s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m03_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-610874 node start m02 -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:16:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:16:16.315983   29617 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:16:16.316246   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316256   29617 out.go:358] Setting ErrFile to fd 2...
	I1011 21:16:16.316260   29617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:16:16.316440   29617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:16:16.316986   29617 out.go:352] Setting JSON to false
	I1011 21:16:16.317794   29617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3521,"bootTime":1728677855,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:16:16.317891   29617 start.go:139] virtualization: kvm guest
	I1011 21:16:16.320541   29617 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:16:16.321962   29617 notify.go:220] Checking for updates...
	I1011 21:16:16.321994   29617 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:16:16.323197   29617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:16:16.324431   29617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:16:16.325803   29617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.326998   29617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:16:16.328308   29617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:16:16.329813   29617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:16:16.364781   29617 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 21:16:16.366005   29617 start.go:297] selected driver: kvm2
	I1011 21:16:16.366018   29617 start.go:901] validating driver "kvm2" against <nil>
	I1011 21:16:16.366031   29617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:16:16.366752   29617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.366844   29617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:16:16.382125   29617 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:16:16.382207   29617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 21:16:16.382499   29617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:16:16.382537   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:16.382594   29617 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 21:16:16.382605   29617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 21:16:16.382687   29617 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:16.382807   29617 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:16:16.384631   29617 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:16:16.385929   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:16.385976   29617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:16:16.385989   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:16:16.386070   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:16:16.386083   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:16:16.386381   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:16.386407   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json: {Name:mk126d2587705783f49cefd5532c6478d010ac07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:16.386555   29617 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:16:16.386593   29617 start.go:364] duration metric: took 23.105µs to acquireMachinesLock for "ha-610874"
	I1011 21:16:16.386631   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:16:16.386695   29617 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 21:16:16.388125   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:16:16.388266   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:16:16.388308   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:16:16.402198   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1011 21:16:16.402701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:16:16.403193   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:16:16.403238   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:16:16.403629   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:16:16.403831   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:16.403987   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:16.404130   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:16:16.404153   29617 client.go:168] LocalClient.Create starting
	I1011 21:16:16.404179   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:16:16.404207   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404220   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404273   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:16:16.404296   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:16:16.404309   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:16:16.404323   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:16:16.404331   29617 main.go:141] libmachine: (ha-610874) Calling .PreCreateCheck
	I1011 21:16:16.404634   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:16.404967   29617 main.go:141] libmachine: Creating machine...
	I1011 21:16:16.404978   29617 main.go:141] libmachine: (ha-610874) Calling .Create
	I1011 21:16:16.405091   29617 main.go:141] libmachine: (ha-610874) Creating KVM machine...
	I1011 21:16:16.406548   29617 main.go:141] libmachine: (ha-610874) DBG | found existing default KVM network
	I1011 21:16:16.407330   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.407180   29640 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1011 21:16:16.407350   29617 main.go:141] libmachine: (ha-610874) DBG | created network xml: 
	I1011 21:16:16.407362   29617 main.go:141] libmachine: (ha-610874) DBG | <network>
	I1011 21:16:16.407369   29617 main.go:141] libmachine: (ha-610874) DBG |   <name>mk-ha-610874</name>
	I1011 21:16:16.407378   29617 main.go:141] libmachine: (ha-610874) DBG |   <dns enable='no'/>
	I1011 21:16:16.407386   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407396   29617 main.go:141] libmachine: (ha-610874) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1011 21:16:16.407401   29617 main.go:141] libmachine: (ha-610874) DBG |     <dhcp>
	I1011 21:16:16.407430   29617 main.go:141] libmachine: (ha-610874) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1011 21:16:16.407460   29617 main.go:141] libmachine: (ha-610874) DBG |     </dhcp>
	I1011 21:16:16.407476   29617 main.go:141] libmachine: (ha-610874) DBG |   </ip>
	I1011 21:16:16.407485   29617 main.go:141] libmachine: (ha-610874) DBG |   
	I1011 21:16:16.407492   29617 main.go:141] libmachine: (ha-610874) DBG | </network>
	I1011 21:16:16.407498   29617 main.go:141] libmachine: (ha-610874) DBG | 
	I1011 21:16:16.412623   29617 main.go:141] libmachine: (ha-610874) DBG | trying to create private KVM network mk-ha-610874 192.168.39.0/24...
	I1011 21:16:16.475097   29617 main.go:141] libmachine: (ha-610874) DBG | private KVM network mk-ha-610874 192.168.39.0/24 created
	I1011 21:16:16.475123   29617 main.go:141] libmachine: (ha-610874) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.475147   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.475097   29640 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.475159   29617 main.go:141] libmachine: (ha-610874) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:16:16.475241   29617 main.go:141] libmachine: (ha-610874) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:16:16.729125   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.729005   29640 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa...
	I1011 21:16:16.910019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.909910   29640 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk...
	I1011 21:16:16.910047   29617 main.go:141] libmachine: (ha-610874) DBG | Writing magic tar header
	I1011 21:16:16.910056   29617 main.go:141] libmachine: (ha-610874) DBG | Writing SSH key tar header
	I1011 21:16:16.910063   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:16.910020   29640 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 ...
	I1011 21:16:16.910136   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874
	I1011 21:16:16.910176   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874 (perms=drwx------)
	I1011 21:16:16.910191   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:16:16.910200   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:16:16.910207   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:16:16.910225   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:16:16.910242   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:16:16.910260   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:16:16.910277   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:16:16.910286   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:16:16.910293   29617 main.go:141] libmachine: (ha-610874) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:16:16.910306   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:16.910328   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:16:16.910345   29617 main.go:141] libmachine: (ha-610874) DBG | Checking permissions on dir: /home
	I1011 21:16:16.910356   29617 main.go:141] libmachine: (ha-610874) DBG | Skipping /home - not owner
	I1011 21:16:16.911372   29617 main.go:141] libmachine: (ha-610874) define libvirt domain using xml: 
	I1011 21:16:16.911391   29617 main.go:141] libmachine: (ha-610874) <domain type='kvm'>
	I1011 21:16:16.911398   29617 main.go:141] libmachine: (ha-610874)   <name>ha-610874</name>
	I1011 21:16:16.911402   29617 main.go:141] libmachine: (ha-610874)   <memory unit='MiB'>2200</memory>
	I1011 21:16:16.911407   29617 main.go:141] libmachine: (ha-610874)   <vcpu>2</vcpu>
	I1011 21:16:16.911412   29617 main.go:141] libmachine: (ha-610874)   <features>
	I1011 21:16:16.911418   29617 main.go:141] libmachine: (ha-610874)     <acpi/>
	I1011 21:16:16.911425   29617 main.go:141] libmachine: (ha-610874)     <apic/>
	I1011 21:16:16.911430   29617 main.go:141] libmachine: (ha-610874)     <pae/>
	I1011 21:16:16.911442   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911451   29617 main.go:141] libmachine: (ha-610874)   </features>
	I1011 21:16:16.911459   29617 main.go:141] libmachine: (ha-610874)   <cpu mode='host-passthrough'>
	I1011 21:16:16.911467   29617 main.go:141] libmachine: (ha-610874)   
	I1011 21:16:16.911473   29617 main.go:141] libmachine: (ha-610874)   </cpu>
	I1011 21:16:16.911479   29617 main.go:141] libmachine: (ha-610874)   <os>
	I1011 21:16:16.911484   29617 main.go:141] libmachine: (ha-610874)     <type>hvm</type>
	I1011 21:16:16.911489   29617 main.go:141] libmachine: (ha-610874)     <boot dev='cdrom'/>
	I1011 21:16:16.911492   29617 main.go:141] libmachine: (ha-610874)     <boot dev='hd'/>
	I1011 21:16:16.911498   29617 main.go:141] libmachine: (ha-610874)     <bootmenu enable='no'/>
	I1011 21:16:16.911504   29617 main.go:141] libmachine: (ha-610874)   </os>
	I1011 21:16:16.911510   29617 main.go:141] libmachine: (ha-610874)   <devices>
	I1011 21:16:16.911516   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='cdrom'>
	I1011 21:16:16.911532   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/boot2docker.iso'/>
	I1011 21:16:16.911547   29617 main.go:141] libmachine: (ha-610874)       <target dev='hdc' bus='scsi'/>
	I1011 21:16:16.911568   29617 main.go:141] libmachine: (ha-610874)       <readonly/>
	I1011 21:16:16.911586   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911596   29617 main.go:141] libmachine: (ha-610874)     <disk type='file' device='disk'>
	I1011 21:16:16.911605   29617 main.go:141] libmachine: (ha-610874)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:16:16.911637   29617 main.go:141] libmachine: (ha-610874)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/ha-610874.rawdisk'/>
	I1011 21:16:16.911655   29617 main.go:141] libmachine: (ha-610874)       <target dev='hda' bus='virtio'/>
	I1011 21:16:16.911674   29617 main.go:141] libmachine: (ha-610874)     </disk>
	I1011 21:16:16.911692   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911700   29617 main.go:141] libmachine: (ha-610874)       <source network='mk-ha-610874'/>
	I1011 21:16:16.911705   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911709   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911713   29617 main.go:141] libmachine: (ha-610874)     <interface type='network'>
	I1011 21:16:16.911719   29617 main.go:141] libmachine: (ha-610874)       <source network='default'/>
	I1011 21:16:16.911726   29617 main.go:141] libmachine: (ha-610874)       <model type='virtio'/>
	I1011 21:16:16.911730   29617 main.go:141] libmachine: (ha-610874)     </interface>
	I1011 21:16:16.911736   29617 main.go:141] libmachine: (ha-610874)     <serial type='pty'>
	I1011 21:16:16.911741   29617 main.go:141] libmachine: (ha-610874)       <target port='0'/>
	I1011 21:16:16.911745   29617 main.go:141] libmachine: (ha-610874)     </serial>
	I1011 21:16:16.911751   29617 main.go:141] libmachine: (ha-610874)     <console type='pty'>
	I1011 21:16:16.911757   29617 main.go:141] libmachine: (ha-610874)       <target type='serial' port='0'/>
	I1011 21:16:16.911762   29617 main.go:141] libmachine: (ha-610874)     </console>
	I1011 21:16:16.911771   29617 main.go:141] libmachine: (ha-610874)     <rng model='virtio'>
	I1011 21:16:16.911795   29617 main.go:141] libmachine: (ha-610874)       <backend model='random'>/dev/random</backend>
	I1011 21:16:16.911810   29617 main.go:141] libmachine: (ha-610874)     </rng>
	I1011 21:16:16.911818   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911827   29617 main.go:141] libmachine: (ha-610874)     
	I1011 21:16:16.911835   29617 main.go:141] libmachine: (ha-610874)   </devices>
	I1011 21:16:16.911844   29617 main.go:141] libmachine: (ha-610874) </domain>
	I1011 21:16:16.911853   29617 main.go:141] libmachine: (ha-610874) 
	I1011 21:16:16.916111   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:3e:bc:a1 in network default
	I1011 21:16:16.916699   29617 main.go:141] libmachine: (ha-610874) Ensuring networks are active...
	I1011 21:16:16.916720   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:16.917266   29617 main.go:141] libmachine: (ha-610874) Ensuring network default is active
	I1011 21:16:16.917528   29617 main.go:141] libmachine: (ha-610874) Ensuring network mk-ha-610874 is active
	I1011 21:16:16.918196   29617 main.go:141] libmachine: (ha-610874) Getting domain xml...
	I1011 21:16:16.918917   29617 main.go:141] libmachine: (ha-610874) Creating domain...
	I1011 21:16:18.090043   29617 main.go:141] libmachine: (ha-610874) Waiting to get IP...
	I1011 21:16:18.090745   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.091141   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.091169   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.091121   29640 retry.go:31] will retry after 201.066044ms: waiting for machine to come up
	I1011 21:16:18.293473   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.293939   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.293961   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.293905   29640 retry.go:31] will retry after 378.868503ms: waiting for machine to come up
	I1011 21:16:18.674665   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:18.675080   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:18.675111   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:18.675034   29640 retry.go:31] will retry after 485.059913ms: waiting for machine to come up
	I1011 21:16:19.161402   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.161817   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.161841   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.161779   29640 retry.go:31] will retry after 597.34397ms: waiting for machine to come up
	I1011 21:16:19.760468   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:19.761020   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:19.761049   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:19.760968   29640 retry.go:31] will retry after 563.860814ms: waiting for machine to come up
	I1011 21:16:20.326631   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:20.326999   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:20.327019   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:20.326975   29640 retry.go:31] will retry after 723.522472ms: waiting for machine to come up
	I1011 21:16:21.051775   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:21.052216   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:21.052252   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:21.052167   29640 retry.go:31] will retry after 1.08960891s: waiting for machine to come up
	I1011 21:16:22.142962   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:22.143401   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:22.143426   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:22.143368   29640 retry.go:31] will retry after 897.228253ms: waiting for machine to come up
	I1011 21:16:23.042418   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:23.042804   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:23.042830   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:23.042766   29640 retry.go:31] will retry after 1.598924345s: waiting for machine to come up
	I1011 21:16:24.643409   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:24.643801   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:24.643824   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:24.643752   29640 retry.go:31] will retry after 2.213754576s: waiting for machine to come up
	I1011 21:16:26.858883   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:26.859262   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:26.859288   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:26.859206   29640 retry.go:31] will retry after 2.657896821s: waiting for machine to come up
	I1011 21:16:29.518223   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:29.518660   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:29.518685   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:29.518604   29640 retry.go:31] will retry after 3.090933093s: waiting for machine to come up
	I1011 21:16:32.611083   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:32.611504   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find current IP address of domain ha-610874 in network mk-ha-610874
	I1011 21:16:32.611526   29617 main.go:141] libmachine: (ha-610874) DBG | I1011 21:16:32.611439   29640 retry.go:31] will retry after 4.256728144s: waiting for machine to come up
	I1011 21:16:36.869470   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.869869   29617 main.go:141] libmachine: (ha-610874) Found IP for machine: 192.168.39.10
	I1011 21:16:36.869889   29617 main.go:141] libmachine: (ha-610874) Reserving static IP address...
	I1011 21:16:36.869901   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has current primary IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.870189   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "ha-610874", mac: "52:54:00:5f:c7:da", ip: "192.168.39.10"} in network mk-ha-610874
	I1011 21:16:36.939387   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:36.939416   29617 main.go:141] libmachine: (ha-610874) Reserved static IP address: 192.168.39.10
	I1011 21:16:36.939452   29617 main.go:141] libmachine: (ha-610874) Waiting for SSH to be available...
	I1011 21:16:36.941715   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:36.941968   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874
	I1011 21:16:36.941981   29617 main.go:141] libmachine: (ha-610874) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:5f:c7:da
	I1011 21:16:36.942096   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:36.942140   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:36.942184   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:36.942200   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:36.942220   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:36.945904   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:16:36.945918   29617 main.go:141] libmachine: (ha-610874) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:16:36.945924   29617 main.go:141] libmachine: (ha-610874) DBG | command : exit 0
	I1011 21:16:36.945937   29617 main.go:141] libmachine: (ha-610874) DBG | err     : exit status 255
	I1011 21:16:36.945943   29617 main.go:141] libmachine: (ha-610874) DBG | output  : 
	I1011 21:16:39.948099   29617 main.go:141] libmachine: (ha-610874) DBG | Getting to WaitForSSH function...
	I1011 21:16:39.950401   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950756   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:39.950783   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:39.950892   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH client type: external
	I1011 21:16:39.950914   29617 main.go:141] libmachine: (ha-610874) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa (-rw-------)
	I1011 21:16:39.950953   29617 main.go:141] libmachine: (ha-610874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:16:39.950970   29617 main.go:141] libmachine: (ha-610874) DBG | About to run SSH command:
	I1011 21:16:39.950994   29617 main.go:141] libmachine: (ha-610874) DBG | exit 0
	I1011 21:16:40.078944   29617 main.go:141] libmachine: (ha-610874) DBG | SSH cmd err, output: <nil>: 
	I1011 21:16:40.079215   29617 main.go:141] libmachine: (ha-610874) KVM machine creation complete!
	I1011 21:16:40.079553   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:40.080090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080284   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:40.080465   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:16:40.080487   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:16:40.081981   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:16:40.081998   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:16:40.082006   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:16:40.082015   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.084298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084628   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.084651   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.084818   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.084959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085094   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.085224   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.085388   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.085639   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.085653   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:16:40.198146   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.198167   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:16:40.198175   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.200910   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201288   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.201309   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.201507   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.201664   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.201836   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.202076   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.202254   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.202419   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.202429   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:16:40.320067   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:16:40.320126   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:16:40.320134   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:16:40.320143   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320383   29617 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:16:40.320406   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.320566   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.322841   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323123   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.323151   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.323298   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.323462   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323604   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.323710   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.323847   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.324007   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.324018   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:16:40.453038   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:16:40.453062   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.455945   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456318   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.456341   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.456518   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.456721   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456849   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.456959   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.457152   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.457380   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.457403   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:16:40.579667   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:16:40.579694   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:16:40.579712   29617 buildroot.go:174] setting up certificates
	I1011 21:16:40.579722   29617 provision.go:84] configureAuth start
	I1011 21:16:40.579730   29617 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:16:40.579972   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:40.582609   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.582944   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.582970   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.583046   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.585314   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585630   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.585652   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.585815   29617 provision.go:143] copyHostCerts
	I1011 21:16:40.585854   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585886   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:16:40.585905   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:16:40.585976   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:16:40.586075   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586099   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:16:40.586109   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:16:40.586148   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:16:40.586259   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586280   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:16:40.586286   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:16:40.586312   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:16:40.586375   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
	I1011 21:16:40.739496   29617 provision.go:177] copyRemoteCerts
	I1011 21:16:40.739549   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:16:40.739572   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.742211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742512   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.742540   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.742690   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.742858   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.743050   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.743333   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:40.830053   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:16:40.830129   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:16:40.854808   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:16:40.854871   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:16:40.878779   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:16:40.878844   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1011 21:16:40.903681   29617 provision.go:87] duration metric: took 323.94786ms to configureAuth
	I1011 21:16:40.903706   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:16:40.903876   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:16:40.903945   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:40.906420   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906781   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:40.906802   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:40.906980   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:40.907177   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907312   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:40.907417   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:40.907537   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:40.907709   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:40.907729   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:16:41.149826   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:16:41.149854   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:16:41.149864   29617 main.go:141] libmachine: (ha-610874) Calling .GetURL
	I1011 21:16:41.151110   29617 main.go:141] libmachine: (ha-610874) DBG | Using libvirt version 6000000
	I1011 21:16:41.153298   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153626   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.153645   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.153813   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:16:41.153832   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:16:41.153840   29617 client.go:171] duration metric: took 24.749677896s to LocalClient.Create
	I1011 21:16:41.153864   29617 start.go:167] duration metric: took 24.749734503s to libmachine.API.Create "ha-610874"
	I1011 21:16:41.153877   29617 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:16:41.153888   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:16:41.153907   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.154134   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:16:41.154156   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.156353   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156731   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.156764   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.156902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.157060   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.157197   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.157377   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.245691   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:16:41.249882   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:16:41.249905   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:16:41.249959   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:16:41.250032   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:16:41.250041   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:16:41.250126   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:16:41.259595   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:41.283193   29617 start.go:296] duration metric: took 129.282074ms for postStartSetup
	I1011 21:16:41.283237   29617 main.go:141] libmachine: (ha-610874) Calling .GetConfigRaw
	I1011 21:16:41.283845   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.286641   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.286965   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.286993   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.287545   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:16:41.287766   29617 start.go:128] duration metric: took 24.901059572s to createHost
	I1011 21:16:41.287798   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.290002   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290466   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.290494   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.290571   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.290756   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.290937   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.291088   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.291234   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:16:41.291438   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:16:41.291450   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:16:41.403429   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681401.368525171
	
	I1011 21:16:41.403454   29617 fix.go:216] guest clock: 1728681401.368525171
	I1011 21:16:41.403464   29617 fix.go:229] Guest: 2024-10-11 21:16:41.368525171 +0000 UTC Remote: 2024-10-11 21:16:41.287784391 +0000 UTC m=+25.009627787 (delta=80.74078ms)
	I1011 21:16:41.403482   29617 fix.go:200] guest clock delta is within tolerance: 80.74078ms
	I1011 21:16:41.403487   29617 start.go:83] releasing machines lock for "ha-610874", held for 25.016883267s
	I1011 21:16:41.403504   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.403754   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:41.406243   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406536   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.406580   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.406719   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407201   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407373   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:16:41.407483   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:16:41.407533   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.407566   29617 ssh_runner.go:195] Run: cat /version.json
	I1011 21:16:41.407594   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:16:41.409924   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410186   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410211   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410232   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410307   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.410474   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.410626   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.410667   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:41.410689   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:41.410822   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.410885   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:16:41.411000   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:16:41.411159   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:16:41.411313   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:16:41.492040   29617 ssh_runner.go:195] Run: systemctl --version
	I1011 21:16:41.526227   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:16:41.684068   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:16:41.690188   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:16:41.690243   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:16:41.709475   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:16:41.709500   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:16:41.709563   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:16:41.725364   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:16:41.739326   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:16:41.739404   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:16:41.753640   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:16:41.767723   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:16:41.878060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:16:42.036051   29617 docker.go:233] disabling docker service ...
	I1011 21:16:42.036136   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:16:42.051987   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:16:42.065946   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:16:42.197199   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:16:42.333061   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:16:42.346878   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:16:42.365538   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:16:42.365592   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.375884   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:16:42.375943   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.386250   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.396765   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.407109   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:16:42.417549   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.427975   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.446147   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:16:42.456868   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:16:42.466165   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:16:42.466232   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:16:42.479799   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:16:42.489557   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:42.623905   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:16:42.716796   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:16:42.716871   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:16:42.721858   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:16:42.721918   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:16:42.725704   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:16:42.764981   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:16:42.765051   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.793072   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:16:42.822676   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:16:42.824024   29617 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:16:42.826801   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827112   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:16:42.827137   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:16:42.827350   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:16:42.831498   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:42.845346   29617 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:16:42.845519   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:16:42.845589   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:42.883957   29617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 21:16:42.884036   29617 ssh_runner.go:195] Run: which lz4
	I1011 21:16:42.888030   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1011 21:16:42.888109   29617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 21:16:42.892241   29617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 21:16:42.892274   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 21:16:44.230363   29617 crio.go:462] duration metric: took 1.342272134s to copy over tarball
	I1011 21:16:44.230455   29617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 21:16:46.214291   29617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.983794178s)
	I1011 21:16:46.214315   29617 crio.go:469] duration metric: took 1.983922074s to extract the tarball
	I1011 21:16:46.214323   29617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 21:16:46.250833   29617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:16:46.298082   29617 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:16:46.298105   29617 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:16:46.298113   29617 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:16:46.298286   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:16:46.298384   29617 ssh_runner.go:195] Run: crio config
	I1011 21:16:46.343467   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:46.343493   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:46.343504   29617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:16:46.343528   29617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:16:46.343703   29617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:16:46.343730   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:16:46.343782   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:16:46.359672   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:16:46.359783   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1011 21:16:46.359850   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:16:46.370362   29617 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:16:46.370421   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:16:46.380573   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:16:46.396912   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:16:46.413759   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:16:46.430823   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1011 21:16:46.447531   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:16:46.451423   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:16:46.463809   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:16:46.584169   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:16:46.602286   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:16:46.602304   29617 certs.go:194] generating shared ca certs ...
	I1011 21:16:46.602322   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.602467   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:16:46.602520   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:16:46.602533   29617 certs.go:256] generating profile certs ...
	I1011 21:16:46.602592   29617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:16:46.602638   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt with IP's: []
	I1011 21:16:46.782362   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt ...
	I1011 21:16:46.782395   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt: {Name:mk3593f4e91ffc0372a05bdad3e927ec316a91aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782596   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key ...
	I1011 21:16:46.782611   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key: {Name:mk9677876d62491747fdfd0e3f8d4776645d1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:46.782738   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7
	I1011 21:16:46.782756   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.254]
	I1011 21:16:47.380528   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 ...
	I1011 21:16:47.380560   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7: {Name:mk19e9d91179b46f9b03d4d9246179f41c3327ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380745   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 ...
	I1011 21:16:47.380776   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7: {Name:mk7fedd6c046987d5af851e2eed75ec367a33eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.380872   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:16:47.380985   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.588e14d7 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:16:47.381067   29617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:16:47.381087   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt with IP's: []
	I1011 21:16:47.453906   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt ...
	I1011 21:16:47.453937   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt: {Name:mka90ed4c47ce0265f1b9da519124bd4fc73bbae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454114   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key ...
	I1011 21:16:47.454128   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key: {Name:mk47103fb5abe47f635456ba2a4ed9a69f678b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:16:47.454230   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:16:47.454250   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:16:47.454266   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:16:47.454284   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:16:47.454303   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:16:47.454319   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:16:47.454335   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:16:47.454354   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:16:47.454417   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:16:47.454461   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:16:47.454473   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:16:47.454508   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:16:47.454543   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:16:47.454573   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:16:47.454648   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:16:47.454696   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.454719   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.454738   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.455273   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:16:47.481574   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:16:47.514683   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:16:47.538141   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:16:47.561021   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:16:47.585590   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:16:47.608816   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:16:47.632949   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:16:47.656849   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:16:47.680043   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:16:47.703417   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:16:47.726027   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:16:47.747378   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:16:47.754019   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:16:47.765407   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770565   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.770631   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:16:47.776851   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:16:47.788126   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:16:47.799052   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803877   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.803931   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:16:47.810054   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:16:47.821548   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:16:47.832817   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837775   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.837829   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:16:47.843943   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
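The test/ln/hash steps above install each CA into the guest's trust store by linking it under its OpenSSL subject hash. A minimal manual sketch of the same sequence for minikubeCA.pem (paths and the resulting hash b5213941 are taken from the log; this is standard openssl/coreutils usage, not minikube-specific tooling):
    # expose the CA under /etc/ssl/certs, then register it under its subject hash
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"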
	I1011 21:16:47.855398   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:16:47.859877   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:16:47.859928   29617 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:16:47.860006   29617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:16:47.860081   29617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:16:47.903170   29617 cri.go:89] found id: ""
	I1011 21:16:47.903248   29617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:16:47.914400   29617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 21:16:47.924721   29617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 21:16:47.935673   29617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 21:16:47.935695   29617 kubeadm.go:157] found existing configuration files:
	
	I1011 21:16:47.935740   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 21:16:47.945454   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 21:16:47.945524   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 21:16:47.955440   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 21:16:47.964875   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 21:16:47.964944   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 21:16:47.974788   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.984258   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 21:16:47.984307   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 21:16:47.993726   29617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 21:16:48.002584   29617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 21:16:48.002650   29617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
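The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that is missing or does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A compressed sketch of the same check:
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf   # missing or pointing elsewhere: delete and let kubeadm rewrite it
    done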
	I1011 21:16:48.012268   29617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 21:16:48.121155   29617 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 21:16:48.121351   29617 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 21:16:48.250203   29617 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 21:16:48.250314   29617 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 21:16:48.250452   29617 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 21:16:48.261245   29617 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 21:16:48.410718   29617 out.go:235]   - Generating certificates and keys ...
	I1011 21:16:48.410844   29617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 21:16:48.410931   29617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 21:16:48.542325   29617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 21:16:48.608543   29617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 21:16:48.797753   29617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 21:16:48.873089   29617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 21:16:49.070716   29617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 21:16:49.071155   29617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.372270   29617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 21:16:49.372512   29617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-610874 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I1011 21:16:49.423801   29617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 21:16:49.655483   29617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 21:16:49.724172   29617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 21:16:49.724487   29617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 21:16:50.017890   29617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 21:16:50.285355   29617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 21:16:50.392641   29617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 21:16:50.748011   29617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 21:16:50.984708   29617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 21:16:50.985344   29617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 21:16:50.988659   29617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 21:16:50.990557   29617 out.go:235]   - Booting up control plane ...
	I1011 21:16:50.990675   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 21:16:50.990768   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 21:16:50.992112   29617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 21:16:51.010698   29617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 21:16:51.019483   29617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 21:16:51.019560   29617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 21:16:51.165086   29617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 21:16:51.165244   29617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 21:16:51.666035   29617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.408194ms
	I1011 21:16:51.666178   29617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 21:16:58.166573   29617 kubeadm.go:310] [api-check] The API server is healthy after 6.502304408s
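The two health checks above poll well-known endpoints; the same probes can be run by hand against this node (addresses taken from the log; the API server check uses -k because the cert is signed by the cluster CA):
    curl -s http://127.0.0.1:10248/healthz          # kubelet health, from inside the VM
    curl -sk https://192.168.39.10:8443/healthz     # kube-apiserver health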
	I1011 21:16:58.179631   29617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 21:16:58.195028   29617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 21:16:58.220647   29617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 21:16:58.220871   29617 kubeadm.go:310] [mark-control-plane] Marking the node ha-610874 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 21:16:58.236113   29617 kubeadm.go:310] [bootstrap-token] Using token: j1o64v.rjb74fe9bovjls5f
	I1011 21:16:58.237740   29617 out.go:235]   - Configuring RBAC rules ...
	I1011 21:16:58.237875   29617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 21:16:58.245441   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 21:16:58.254162   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 21:16:58.259203   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 21:16:58.274345   29617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 21:16:58.278840   29617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 21:16:58.578576   29617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 21:16:59.008419   29617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 21:16:59.573438   29617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 21:16:59.574394   29617 kubeadm.go:310] 
	I1011 21:16:59.574519   29617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 21:16:59.574537   29617 kubeadm.go:310] 
	I1011 21:16:59.574645   29617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 21:16:59.574659   29617 kubeadm.go:310] 
	I1011 21:16:59.574685   29617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 21:16:59.574753   29617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 21:16:59.574825   29617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 21:16:59.574836   29617 kubeadm.go:310] 
	I1011 21:16:59.574917   29617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 21:16:59.574925   29617 kubeadm.go:310] 
	I1011 21:16:59.574988   29617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 21:16:59.574998   29617 kubeadm.go:310] 
	I1011 21:16:59.575073   29617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 21:16:59.575188   29617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 21:16:59.575286   29617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 21:16:59.575300   29617 kubeadm.go:310] 
	I1011 21:16:59.575406   29617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 21:16:59.575519   29617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 21:16:59.575533   29617 kubeadm.go:310] 
	I1011 21:16:59.575645   29617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.575774   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 21:16:59.575812   29617 kubeadm.go:310] 	--control-plane 
	I1011 21:16:59.575825   29617 kubeadm.go:310] 
	I1011 21:16:59.575924   29617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 21:16:59.575932   29617 kubeadm.go:310] 
	I1011 21:16:59.576044   29617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j1o64v.rjb74fe9bovjls5f \
	I1011 21:16:59.576197   29617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 21:16:59.576985   29617 kubeadm.go:310] W1011 21:16:48.086167     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577396   29617 kubeadm.go:310] W1011 21:16:48.087109     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 21:16:59.577500   29617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 21:16:59.577512   29617 cni.go:84] Creating CNI manager for ""
	I1011 21:16:59.577520   29617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 21:16:59.579873   29617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 21:16:59.581130   29617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 21:16:59.586500   29617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 21:16:59.586517   29617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 21:16:59.606073   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
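The CNI manifest recommended above is applied as an ordinary kubectl resource. A quick follow-up check, treating the DaemonSet name kindnet and the kubeconfig context ha-610874 (minikube's usual profile-named context) as assumptions not shown in this log:
    kubectl --context ha-610874 -n kube-system get daemonset kindnet
    ls /opt/cni/bin/   # the earlier stat of /opt/cni/bin/portmap confirms the bundled CNI plugins are present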
	I1011 21:16:59.978632   29617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 21:16:59.978713   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:16:59.978732   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874 minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=true
	I1011 21:17:00.174706   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:00.174708   29617 ops.go:34] apiserver oom_adj: -16
	I1011 21:17:00.675693   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.174849   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:01.675518   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.174832   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:02.674899   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.174904   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 21:17:03.254520   29617 kubeadm.go:1113] duration metric: took 3.275873473s to wait for elevateKubeSystemPrivileges
	I1011 21:17:03.254557   29617 kubeadm.go:394] duration metric: took 15.394633584s to StartCluster
	I1011 21:17:03.254574   29617 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.254667   29617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.255426   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:03.255658   29617 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:03.255670   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 21:17:03.255683   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:17:03.255698   29617 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 21:17:03.255784   29617 addons.go:69] Setting storage-provisioner=true in profile "ha-610874"
	I1011 21:17:03.255803   29617 addons.go:234] Setting addon storage-provisioner=true in "ha-610874"
	I1011 21:17:03.255807   29617 addons.go:69] Setting default-storageclass=true in profile "ha-610874"
	I1011 21:17:03.255835   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.255840   29617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-610874"
	I1011 21:17:03.255868   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:03.256287   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256300   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.256340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.256367   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.271522   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1011 21:17:03.271689   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I1011 21:17:03.272056   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272154   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.272592   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272609   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272755   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.272784   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.272931   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273093   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.273112   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.273524   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.273562   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.275146   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:03.275352   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 21:17:03.275763   29617 cert_rotation.go:140] Starting client certificate rotation controller
	I1011 21:17:03.275942   29617 addons.go:234] Setting addon default-storageclass=true in "ha-610874"
	I1011 21:17:03.275971   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:03.276303   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.276340   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.288268   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1011 21:17:03.288701   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.289186   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.289212   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.289573   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.289758   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.290984   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1011 21:17:03.291476   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.291798   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.292035   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.292052   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.292353   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.292786   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:03.292827   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:03.293969   29617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 21:17:03.295203   29617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.295223   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 21:17:03.295241   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.298221   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298669   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.298695   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.298893   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.299039   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.299248   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.299371   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
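The ssh client created here is an ordinary key-based connection; the equivalent manual command, with the key path, user and address taken from the log, would be:
    ssh -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa docker@192.168.39.10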
	I1011 21:17:03.307894   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33173
	I1011 21:17:03.308319   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:03.308780   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:03.308794   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:03.309115   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:03.309363   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:03.311112   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:03.311334   29617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.311352   29617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 21:17:03.311368   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:03.314487   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.314914   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:03.314938   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:03.315112   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:03.315274   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:03.315432   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:03.315580   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:03.390668   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 21:17:03.477039   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:17:03.523146   29617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:17:03.861068   29617 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
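The sed pipeline above rewrites CoreDNS's Corefile so that host.minikube.internal resolves to the host-only gateway 192.168.39.1. A quick way to confirm the injected block, assuming the standard coredns ConfigMap layout in kube-system:
    kubectl --context ha-610874 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected fragment:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }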
	I1011 21:17:04.076843   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076867   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.076939   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.076960   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077121   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077129   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077152   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077162   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077170   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077198   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077208   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077216   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.077228   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.077423   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077435   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077497   29617 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 21:17:04.077512   29617 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 21:17:04.077537   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.077557   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.077562   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.077613   29617 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1011 21:17:04.077629   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.077640   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.077652   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.088649   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:04.089177   29617 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1011 21:17:04.089196   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:04.089204   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:04.089222   29617 round_trippers.go:473]     Content-Type: application/json
	I1011 21:17:04.089229   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:04.091300   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:04.091435   29617 main.go:141] libmachine: Making call to close driver server
	I1011 21:17:04.091450   29617 main.go:141] libmachine: (ha-610874) Calling .Close
	I1011 21:17:04.091679   29617 main.go:141] libmachine: (ha-610874) DBG | Closing plugin on server side
	I1011 21:17:04.091716   29617 main.go:141] libmachine: Successfully made call to close driver server
	I1011 21:17:04.091728   29617 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 21:17:04.093543   29617 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 21:17:04.094783   29617 addons.go:510] duration metric: took 839.089678ms for enable addons: enabled=[storage-provisioner default-storageclass]
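The GET/PUT round trip above is the default-storageclass addon marking the provisioned class as the cluster default, alongside the storage-provisioner pod applied earlier. A quick verification sketch (the class name "standard" and the pod name "storage-provisioner" are minikube's usual choices, assumed here rather than shown in this log):
    kubectl --context ha-610874 get storageclass                      # the default class is flagged "(default)"
    kubectl --context ha-610874 -n kube-system get pod storage-provisioner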
	I1011 21:17:04.094816   29617 start.go:246] waiting for cluster config update ...
	I1011 21:17:04.094834   29617 start.go:255] writing updated cluster config ...
	I1011 21:17:04.096346   29617 out.go:201] 
	I1011 21:17:04.097685   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:04.097746   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.099389   29617 out.go:177] * Starting "ha-610874-m02" control-plane node in "ha-610874" cluster
	I1011 21:17:04.100656   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:17:04.100673   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:17:04.100774   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:17:04.100788   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:17:04.100851   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:04.100998   29617 start.go:360] acquireMachinesLock for ha-610874-m02: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:17:04.101042   29617 start.go:364] duration metric: took 25.742µs to acquireMachinesLock for "ha-610874-m02"
	I1011 21:17:04.101063   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:04.101132   29617 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1011 21:17:04.102447   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:17:04.102519   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:04.102554   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:04.117018   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I1011 21:17:04.117574   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:04.118020   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:04.118046   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:04.118342   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:04.118495   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:04.118627   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:04.118734   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:17:04.118757   29617 client.go:168] LocalClient.Create starting
	I1011 21:17:04.118782   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:17:04.118814   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118825   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118865   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:17:04.118883   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:17:04.118895   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:17:04.118909   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:17:04.118916   29617 main.go:141] libmachine: (ha-610874-m02) Calling .PreCreateCheck
	I1011 21:17:04.119022   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:04.119344   29617 main.go:141] libmachine: Creating machine...
	I1011 21:17:04.119354   29617 main.go:141] libmachine: (ha-610874-m02) Calling .Create
	I1011 21:17:04.119448   29617 main.go:141] libmachine: (ha-610874-m02) Creating KVM machine...
	I1011 21:17:04.120553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing default KVM network
	I1011 21:17:04.120665   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found existing private KVM network mk-ha-610874
	I1011 21:17:04.120779   29617 main.go:141] libmachine: (ha-610874-m02) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.120796   29617 main.go:141] libmachine: (ha-610874-m02) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:17:04.120855   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.120779   29991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.120961   29617 main.go:141] libmachine: (ha-610874-m02) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:17:04.350121   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.350001   29991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa...
	I1011 21:17:04.441541   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441397   29991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk...
	I1011 21:17:04.441576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing magic tar header
	I1011 21:17:04.441591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Writing SSH key tar header
	I1011 21:17:04.441603   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:04.441509   29991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 ...
	I1011 21:17:04.441619   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02
	I1011 21:17:04.441634   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:17:04.441650   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02 (perms=drwx------)
	I1011 21:17:04.441661   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:17:04.441676   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:17:04.441687   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:17:04.441702   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:17:04.441718   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:17:04.441730   29617 main.go:141] libmachine: (ha-610874-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:17:04.441739   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:17:04.441771   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:04.441793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:17:04.441805   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:17:04.441813   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Checking permissions on dir: /home
	I1011 21:17:04.441826   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Skipping /home - not owner
	I1011 21:17:04.442818   29617 main.go:141] libmachine: (ha-610874-m02) define libvirt domain using xml: 
	I1011 21:17:04.442835   29617 main.go:141] libmachine: (ha-610874-m02) <domain type='kvm'>
	I1011 21:17:04.442851   29617 main.go:141] libmachine: (ha-610874-m02)   <name>ha-610874-m02</name>
	I1011 21:17:04.442859   29617 main.go:141] libmachine: (ha-610874-m02)   <memory unit='MiB'>2200</memory>
	I1011 21:17:04.442867   29617 main.go:141] libmachine: (ha-610874-m02)   <vcpu>2</vcpu>
	I1011 21:17:04.442876   29617 main.go:141] libmachine: (ha-610874-m02)   <features>
	I1011 21:17:04.442884   29617 main.go:141] libmachine: (ha-610874-m02)     <acpi/>
	I1011 21:17:04.442894   29617 main.go:141] libmachine: (ha-610874-m02)     <apic/>
	I1011 21:17:04.442901   29617 main.go:141] libmachine: (ha-610874-m02)     <pae/>
	I1011 21:17:04.442909   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.442916   29617 main.go:141] libmachine: (ha-610874-m02)   </features>
	I1011 21:17:04.442924   29617 main.go:141] libmachine: (ha-610874-m02)   <cpu mode='host-passthrough'>
	I1011 21:17:04.442929   29617 main.go:141] libmachine: (ha-610874-m02)   
	I1011 21:17:04.442935   29617 main.go:141] libmachine: (ha-610874-m02)   </cpu>
	I1011 21:17:04.442940   29617 main.go:141] libmachine: (ha-610874-m02)   <os>
	I1011 21:17:04.442944   29617 main.go:141] libmachine: (ha-610874-m02)     <type>hvm</type>
	I1011 21:17:04.442949   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='cdrom'/>
	I1011 21:17:04.442953   29617 main.go:141] libmachine: (ha-610874-m02)     <boot dev='hd'/>
	I1011 21:17:04.442958   29617 main.go:141] libmachine: (ha-610874-m02)     <bootmenu enable='no'/>
	I1011 21:17:04.442966   29617 main.go:141] libmachine: (ha-610874-m02)   </os>
	I1011 21:17:04.442970   29617 main.go:141] libmachine: (ha-610874-m02)   <devices>
	I1011 21:17:04.442975   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='cdrom'>
	I1011 21:17:04.442982   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/boot2docker.iso'/>
	I1011 21:17:04.442988   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hdc' bus='scsi'/>
	I1011 21:17:04.442992   29617 main.go:141] libmachine: (ha-610874-m02)       <readonly/>
	I1011 21:17:04.442999   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443009   29617 main.go:141] libmachine: (ha-610874-m02)     <disk type='file' device='disk'>
	I1011 21:17:04.443018   29617 main.go:141] libmachine: (ha-610874-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:17:04.443028   29617 main.go:141] libmachine: (ha-610874-m02)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/ha-610874-m02.rawdisk'/>
	I1011 21:17:04.443033   29617 main.go:141] libmachine: (ha-610874-m02)       <target dev='hda' bus='virtio'/>
	I1011 21:17:04.443037   29617 main.go:141] libmachine: (ha-610874-m02)     </disk>
	I1011 21:17:04.443042   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443047   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='mk-ha-610874'/>
	I1011 21:17:04.443052   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443057   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443061   29617 main.go:141] libmachine: (ha-610874-m02)     <interface type='network'>
	I1011 21:17:04.443066   29617 main.go:141] libmachine: (ha-610874-m02)       <source network='default'/>
	I1011 21:17:04.443071   29617 main.go:141] libmachine: (ha-610874-m02)       <model type='virtio'/>
	I1011 21:17:04.443076   29617 main.go:141] libmachine: (ha-610874-m02)     </interface>
	I1011 21:17:04.443080   29617 main.go:141] libmachine: (ha-610874-m02)     <serial type='pty'>
	I1011 21:17:04.443085   29617 main.go:141] libmachine: (ha-610874-m02)       <target port='0'/>
	I1011 21:17:04.443089   29617 main.go:141] libmachine: (ha-610874-m02)     </serial>
	I1011 21:17:04.443094   29617 main.go:141] libmachine: (ha-610874-m02)     <console type='pty'>
	I1011 21:17:04.443099   29617 main.go:141] libmachine: (ha-610874-m02)       <target type='serial' port='0'/>
	I1011 21:17:04.443103   29617 main.go:141] libmachine: (ha-610874-m02)     </console>
	I1011 21:17:04.443109   29617 main.go:141] libmachine: (ha-610874-m02)     <rng model='virtio'>
	I1011 21:17:04.443137   29617 main.go:141] libmachine: (ha-610874-m02)       <backend model='random'>/dev/random</backend>
	I1011 21:17:04.443157   29617 main.go:141] libmachine: (ha-610874-m02)     </rng>
	I1011 21:17:04.443167   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443173   29617 main.go:141] libmachine: (ha-610874-m02)     
	I1011 21:17:04.443189   29617 main.go:141] libmachine: (ha-610874-m02)   </devices>
	I1011 21:17:04.443198   29617 main.go:141] libmachine: (ha-610874-m02) </domain>
	I1011 21:17:04.443208   29617 main.go:141] libmachine: (ha-610874-m02) 
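The XML printed above is the libvirt domain definition for the second control-plane VM. Once the domain is defined, the stored definition and the two networks it attaches to can be inspected with stock virsh commands (the qemu:///system URI comes from the config shown earlier):
    virsh -c qemu:///system dumpxml ha-610874-m02   # the definition as libvirt stored it
    virsh -c qemu:///system net-list --all          # both "default" and "mk-ha-610874" should be active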
	I1011 21:17:04.449596   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f0:af:4d in network default
	I1011 21:17:04.450115   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring networks are active...
	I1011 21:17:04.450137   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:04.450871   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network default is active
	I1011 21:17:04.451172   29617 main.go:141] libmachine: (ha-610874-m02) Ensuring network mk-ha-610874 is active
	I1011 21:17:04.451696   29617 main.go:141] libmachine: (ha-610874-m02) Getting domain xml...
	I1011 21:17:04.452466   29617 main.go:141] libmachine: (ha-610874-m02) Creating domain...
	I1011 21:17:05.723228   29617 main.go:141] libmachine: (ha-610874-m02) Waiting to get IP...
	I1011 21:17:05.723997   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.724437   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.724489   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.724421   29991 retry.go:31] will retry after 216.617717ms: waiting for machine to come up
	I1011 21:17:05.943023   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:05.943470   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:05.943493   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:05.943418   29991 retry.go:31] will retry after 323.475706ms: waiting for machine to come up
	I1011 21:17:06.268759   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.269130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.269185   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.269071   29991 retry.go:31] will retry after 341.815784ms: waiting for machine to come up
	I1011 21:17:06.612587   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:06.613044   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:06.613069   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:06.612994   29991 retry.go:31] will retry after 575.567056ms: waiting for machine to come up
	I1011 21:17:07.189626   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.190024   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.190052   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.189979   29991 retry.go:31] will retry after 508.01524ms: waiting for machine to come up
	I1011 21:17:07.699512   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:07.699870   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:07.699896   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:07.699824   29991 retry.go:31] will retry after 706.438375ms: waiting for machine to come up
	I1011 21:17:08.408130   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:08.408534   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:08.408553   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:08.408491   29991 retry.go:31] will retry after 819.845939ms: waiting for machine to come up
	I1011 21:17:09.229809   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:09.230337   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:09.230361   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:09.230274   29991 retry.go:31] will retry after 1.08916769s: waiting for machine to come up
	I1011 21:17:10.320875   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:10.321316   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:10.321344   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:10.321274   29991 retry.go:31] will retry after 1.825013226s: waiting for machine to come up
	I1011 21:17:12.148418   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:12.148892   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:12.148912   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:12.148854   29991 retry.go:31] will retry after 1.911054739s: waiting for machine to come up
	I1011 21:17:14.062931   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:14.063353   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:14.063381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:14.063300   29991 retry.go:31] will retry after 2.512289875s: waiting for machine to come up
	I1011 21:17:16.577169   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:16.577555   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:16.577580   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:16.577519   29991 retry.go:31] will retry after 3.376491238s: waiting for machine to come up
	I1011 21:17:19.955606   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:19.955968   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find current IP address of domain ha-610874-m02 in network mk-ha-610874
	I1011 21:17:19.955995   29617 main.go:141] libmachine: (ha-610874-m02) DBG | I1011 21:17:19.955923   29991 retry.go:31] will retry after 4.049589987s: waiting for machine to come up
	I1011 21:17:24.010143   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010574   29617 main.go:141] libmachine: (ha-610874-m02) Found IP for machine: 192.168.39.11
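
	[Editor's note] The retry.go lines above show the wait-for-IP pattern: poll the DHCP leases and, on each miss, sleep a progressively longer (slightly jittered) interval before trying again. The sketch below is an illustrative Go version of that loop, not the driver's actual API; the function name, intervals, and growth factor are assumptions made for the example.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP polls lookup until it returns an address or the deadline passes.
	    // Each failed attempt waits a bit longer than the last, with a small random
	    // jitter, mirroring the "will retry after ..." lines in the log above.
	    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        wait := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookup(); err == nil {
	                return ip, nil
	            }
	            jitter := time.Duration(rand.Int63n(int64(wait) / 2))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
	            time.Sleep(wait + jitter)
	            wait = wait * 3 / 2 // grow roughly 1.5x per attempt
	        }
	        return "", errors.New("timed out waiting for machine IP")
	    }

	    func main() {
	        attempts := 0
	        ip, err := waitForIP(func() (string, error) {
	            attempts++
	            if attempts < 4 {
	                return "", errors.New("no DHCP lease yet") // simulate early misses
	            }
	            return "192.168.39.11", nil
	        }, time.Minute)
	        fmt.Println(ip, err)
	    }
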
	I1011 21:17:24.010593   29617 main.go:141] libmachine: (ha-610874-m02) Reserving static IP address...
	I1011 21:17:24.010602   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.010971   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "ha-610874-m02", mac: "52:54:00:f3:cf:5a", ip: "192.168.39.11"} in network mk-ha-610874
	I1011 21:17:24.079043   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:24.079077   29617 main.go:141] libmachine: (ha-610874-m02) Reserved static IP address: 192.168.39.11
	I1011 21:17:24.079093   29617 main.go:141] libmachine: (ha-610874-m02) Waiting for SSH to be available...
	I1011 21:17:24.081543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:24.081867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874
	I1011 21:17:24.081880   29617 main.go:141] libmachine: (ha-610874-m02) DBG | unable to find defined IP address of network mk-ha-610874 interface with MAC address 52:54:00:f3:cf:5a
	I1011 21:17:24.082047   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:24.082076   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:24.082376   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:24.082572   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:24.082591   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:24.086567   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: exit status 255: 
	I1011 21:17:24.086597   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 21:17:24.086608   29617 main.go:141] libmachine: (ha-610874-m02) DBG | command : exit 0
	I1011 21:17:24.086627   29617 main.go:141] libmachine: (ha-610874-m02) DBG | err     : exit status 255
	I1011 21:17:24.086641   29617 main.go:141] libmachine: (ha-610874-m02) DBG | output  : 
	I1011 21:17:27.089089   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Getting to WaitForSSH function...
	I1011 21:17:27.091628   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.091976   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.092001   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.092162   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH client type: external
	I1011 21:17:27.092189   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa (-rw-------)
	I1011 21:17:27.092213   29617 main.go:141] libmachine: (ha-610874-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:17:27.092221   29617 main.go:141] libmachine: (ha-610874-m02) DBG | About to run SSH command:
	I1011 21:17:27.092230   29617 main.go:141] libmachine: (ha-610874-m02) DBG | exit 0
	I1011 21:17:27.218963   29617 main.go:141] libmachine: (ha-610874-m02) DBG | SSH cmd err, output: <nil>: 
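
	[Editor's note] Once an address is known, readiness is confirmed the way the log shows: running "exit 0" over SSH until the command succeeds. A minimal sketch of that probe using the external ssh client with roughly the options logged above; the key path, host, retry count, and sleep are placeholders, not the real defaults.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // sshReady returns nil once "exit 0" succeeds over SSH, retrying a few times.
	    // The ssh options mirror the external-client invocation in the log.
	    func sshReady(host, keyPath string) error {
	        args := []string{
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "ConnectTimeout=10",
	            "-o", "PasswordAuthentication=no",
	            "-o", "IdentitiesOnly=yes",
	            "-i", keyPath,
	            "docker@" + host,
	            "exit 0",
	        }
	        var lastErr error
	        for i := 0; i < 10; i++ {
	            lastErr = exec.Command("ssh", args...).Run()
	            if lastErr == nil {
	                return nil
	            }
	            time.Sleep(3 * time.Second)
	        }
	        return fmt.Errorf("ssh did not become ready: %w", lastErr)
	    }

	    func main() {
	        // Placeholder key path for illustration only.
	        fmt.Println(sshReady("192.168.39.11", "/path/to/machines/ha-610874-m02/id_rsa"))
	    }
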
	I1011 21:17:27.219245   29617 main.go:141] libmachine: (ha-610874-m02) KVM machine creation complete!
	I1011 21:17:27.219616   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:27.220149   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220344   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:27.220511   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:17:27.220532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetState
	I1011 21:17:27.221755   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:17:27.221770   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:17:27.221778   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:17:27.221786   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.223867   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224229   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.224267   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.224374   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.224532   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224655   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.224768   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.224964   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.225164   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.225177   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:17:27.333813   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.333841   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:17:27.333852   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.336538   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.336885   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.336909   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.337071   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.337262   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337411   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.337545   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.337696   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.337866   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.337878   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:17:27.447511   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:17:27.447576   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:17:27.447583   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:17:27.447590   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.447842   29617 buildroot.go:166] provisioning hostname "ha-610874-m02"
	I1011 21:17:27.447866   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.448033   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.450381   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450763   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.450793   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.450924   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.451086   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451309   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.451419   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.451547   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.451737   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.451749   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m02 && echo "ha-610874-m02" | sudo tee /etc/hostname
	I1011 21:17:27.572801   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m02
	
	I1011 21:17:27.572834   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.575352   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575751   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.575776   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.575941   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.576093   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576220   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.576346   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.576461   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:27.576637   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:27.576661   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:17:27.695886   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:17:27.695916   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:17:27.695938   29617 buildroot.go:174] setting up certificates
	I1011 21:17:27.695952   29617 provision.go:84] configureAuth start
	I1011 21:17:27.695968   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetMachineName
	I1011 21:17:27.696239   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:27.698924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699311   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.699342   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.699459   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.701614   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.701924   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.701942   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.702129   29617 provision.go:143] copyHostCerts
	I1011 21:17:27.702158   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702190   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:17:27.702199   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:17:27.702263   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:17:27.702355   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702381   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:17:27.702389   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:17:27.702438   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:17:27.702535   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702560   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:17:27.702567   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:17:27.702604   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:17:27.702691   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m02 san=[127.0.0.1 192.168.39.11 ha-610874-m02 localhost minikube]
	I1011 21:17:27.916455   29617 provision.go:177] copyRemoteCerts
	I1011 21:17:27.916517   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:17:27.916546   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:27.919220   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919586   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:27.919612   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:27.919767   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:27.919931   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:27.920084   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:27.920214   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.005137   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:17:28.005206   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:17:28.030798   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:17:28.030868   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:17:28.053929   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:17:28.053992   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:17:28.077344   29617 provision.go:87] duration metric: took 381.381213ms to configureAuth
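
	[Editor's note] The configureAuth step above generates a server certificate whose SANs cover the loopback address, the machine IP, the machine hostname, and "minikube". The sketch below shows the general shape of issuing such a cert with Go's crypto/x509; it is self-signed to stay self-contained, whereas the real flow signs with the CA key under .minikube/certs, and the organization string and validity period are illustrative.

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    // selfSignedServerCert builds a key pair and a server certificate whose SANs
	    // include every entry in hosts (IPs become IPAddresses, names become DNSNames).
	    func selfSignedServerCert(hosts []string) (certPEM, keyPEM []byte, err error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        for _, h := range hosts {
	            if ip := net.ParseIP(h); ip != nil {
	                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
	            } else {
	                tmpl.DNSNames = append(tmpl.DNSNames, h)
	            }
	        }
	        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	        if err != nil {
	            return nil, nil, err
	        }
	        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	        return certPEM, keyPEM, nil
	    }

	    func main() {
	        cert, key, err := selfSignedServerCert([]string{"127.0.0.1", "192.168.39.11", "ha-610874-m02", "localhost", "minikube"})
	        if err != nil {
	            panic(err)
	        }
	        _ = os.WriteFile("server.pem", cert, 0o644)
	        _ = os.WriteFile("server-key.pem", key, 0o600)
	    }
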
	I1011 21:17:28.077368   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:17:28.077553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:28.077631   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.079998   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080363   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.080391   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.080550   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.080711   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080860   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.080957   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.081126   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.081276   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.081289   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:17:28.305072   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:17:28.305099   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:17:28.305107   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetURL
	I1011 21:17:28.306348   29617 main.go:141] libmachine: (ha-610874-m02) DBG | Using libvirt version 6000000
	I1011 21:17:28.308766   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309119   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.309148   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.309322   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:17:28.309336   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:17:28.309345   29617 client.go:171] duration metric: took 24.190578436s to LocalClient.Create
	I1011 21:17:28.309369   29617 start.go:167] duration metric: took 24.190632715s to libmachine.API.Create "ha-610874"
	I1011 21:17:28.309380   29617 start.go:293] postStartSetup for "ha-610874-m02" (driver="kvm2")
	I1011 21:17:28.309393   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:17:28.309414   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.309649   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:17:28.309678   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.311900   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312234   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.312257   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.312366   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.312513   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.312670   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.312813   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.401258   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:17:28.405713   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:17:28.405741   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:17:28.405819   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:17:28.405893   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:17:28.405901   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:17:28.405976   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:17:28.415792   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:28.439288   29617 start.go:296] duration metric: took 129.894011ms for postStartSetup
	I1011 21:17:28.439338   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetConfigRaw
	I1011 21:17:28.439884   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.442343   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442733   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.442761   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.442929   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:17:28.443099   29617 start.go:128] duration metric: took 24.341953324s to createHost
	I1011 21:17:28.443119   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.445585   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.445871   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.445894   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.446037   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.446185   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446313   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.446509   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.446712   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:17:28.446859   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1011 21:17:28.446869   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:17:28.555655   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681448.532334020
	
	I1011 21:17:28.555684   29617 fix.go:216] guest clock: 1728681448.532334020
	I1011 21:17:28.555698   29617 fix.go:229] Guest: 2024-10-11 21:17:28.53233402 +0000 UTC Remote: 2024-10-11 21:17:28.443109707 +0000 UTC m=+72.164953096 (delta=89.224313ms)
	I1011 21:17:28.555717   29617 fix.go:200] guest clock delta is within tolerance: 89.224313ms
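
	[Editor's note] The fix.go lines above read "date +%s.%N" from the guest and check that the drift against the host clock is within tolerance. A small sketch of that comparison; the parsing helper and the 2-second tolerance used in main are assumptions for illustration, not minikube's actual values.

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // clockDelta parses the guest's "date +%s.%N" output and returns the absolute
	    // drift from the supplied host time.
	    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	        if err != nil {
	            return 0, err
	        }
	        guest := time.Unix(0, int64(secs*float64(time.Second)))
	        d := host.Sub(guest)
	        if d < 0 {
	            d = -d
	        }
	        return d, nil
	    }

	    func main() {
	        d, err := clockDelta("1728681448.532334020\n", time.Now())
	        fmt.Println(d, err, d < 2*time.Second) // is the drift within a 2s tolerance?
	    }
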
	I1011 21:17:28.555723   29617 start.go:83] releasing machines lock for "ha-610874-m02", held for 24.454670186s
	I1011 21:17:28.555747   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.555979   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:28.558215   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.558576   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.558610   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.560996   29617 out.go:177] * Found network options:
	I1011 21:17:28.562345   29617 out.go:177]   - NO_PROXY=192.168.39.10
	W1011 21:17:28.563437   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.563463   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.563914   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564081   29617 main.go:141] libmachine: (ha-610874-m02) Calling .DriverName
	I1011 21:17:28.564167   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:17:28.564198   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	W1011 21:17:28.564293   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:17:28.564371   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:17:28.564394   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHHostname
	I1011 21:17:28.566543   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566887   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.566920   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.566948   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567066   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567235   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567341   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:28.567349   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567359   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:28.567462   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.567515   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHPort
	I1011 21:17:28.567649   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHKeyPath
	I1011 21:17:28.567774   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetSSHUsername
	I1011 21:17:28.567889   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m02/id_rsa Username:docker}
	I1011 21:17:28.804794   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:17:28.816172   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:17:28.816234   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:17:28.833684   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:17:28.833707   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:17:28.833785   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:17:28.850682   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:17:28.865268   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:17:28.865314   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:17:28.879804   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:17:28.893790   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:17:29.005060   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:17:29.161552   29617 docker.go:233] disabling docker service ...
	I1011 21:17:29.161623   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:17:29.176030   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:17:29.188905   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:17:29.314012   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:17:29.444969   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:17:29.458929   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:17:29.477279   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:17:29.477336   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.487485   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:17:29.487557   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.497725   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.508074   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.518078   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:17:29.528405   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.538441   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.555119   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:17:29.568308   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:17:29.578239   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:17:29.578297   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:17:29.591777   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:17:29.601766   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:29.733693   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
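
	[Editor's note] The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, then restart crio. Below is an illustrative Go equivalent of those two in-place edits; in the real flow the edits run over SSH on the guest, and the helper name here is invented for the example.

	    package main

	    import (
	        "fmt"
	        "os"
	        "regexp"
	    )

	    // patchCrioConf rewrites the pause_image and cgroup_manager settings in the
	    // given CRI-O drop-in file, mirroring the sed edits in the log.
	    func patchCrioConf(path, pauseImage, cgroupManager string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
	        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupManager)))
	        return os.WriteFile(path, out, 0o644)
	    }

	    func main() {
	        err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
	            "registry.k8s.io/pause:3.10", "cgroupfs")
	        fmt.Println(err) // a restart of crio would follow, as in the log
	    }
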
	I1011 21:17:29.832686   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:17:29.832769   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:17:29.837474   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:17:29.837531   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:17:29.841328   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:17:29.885910   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:17:29.885997   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.915959   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:17:29.947445   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:17:29.948743   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:17:29.949776   29617 main.go:141] libmachine: (ha-610874-m02) Calling .GetIP
	I1011 21:17:29.952438   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952742   29617 main.go:141] libmachine: (ha-610874-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:cf:5a", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:17:18 +0000 UTC Type:0 Mac:52:54:00:f3:cf:5a Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-610874-m02 Clientid:01:52:54:00:f3:cf:5a}
	I1011 21:17:29.952767   29617 main.go:141] libmachine: (ha-610874-m02) DBG | domain ha-610874-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:f3:cf:5a in network mk-ha-610874
	I1011 21:17:29.952926   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:17:29.957045   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:17:29.969401   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:17:29.969618   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:29.969904   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.969953   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:29.984875   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I1011 21:17:29.985308   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:29.985749   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:29.985772   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:29.986088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:29.986307   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:17:29.987951   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:29.988270   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:29.988309   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:30.002903   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I1011 21:17:30.003325   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:30.003771   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:30.003791   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:30.004088   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:30.004322   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:30.004478   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.11
	I1011 21:17:30.004490   29617 certs.go:194] generating shared ca certs ...
	I1011 21:17:30.004507   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.004645   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:17:30.004706   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:17:30.004720   29617 certs.go:256] generating profile certs ...
	I1011 21:17:30.004812   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:17:30.004845   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a
	I1011 21:17:30.004865   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.254]
	I1011 21:17:30.068798   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a ...
	I1011 21:17:30.068829   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a: {Name:mk7e577273a37f1215e925a89aaf2057d9d70c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069010   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a ...
	I1011 21:17:30.069026   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a: {Name:mk272cb1eed2069075ccbf59f795f6618abcd353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:17:30.069135   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:17:30.069298   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.7c2d201a -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:17:30.069453   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:17:30.069470   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:17:30.069497   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:17:30.069514   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:17:30.069533   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:17:30.069553   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:17:30.069571   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:17:30.069589   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:17:30.069614   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:17:30.069674   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:17:30.069714   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:17:30.069727   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:17:30.069761   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:17:30.069795   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:17:30.069830   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:17:30.069888   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:17:30.069930   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.069950   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.069968   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.070008   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:30.073028   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073411   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:30.073439   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:30.073677   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:30.073887   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:30.074102   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:30.074339   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:30.150977   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:17:30.155841   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:17:30.167973   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:17:30.172398   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:17:30.183178   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:17:30.187494   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:17:30.198396   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:17:30.202690   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:17:30.213924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:17:30.218228   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:17:30.229999   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:17:30.234409   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:17:30.246054   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:17:30.271630   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:17:30.295598   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:17:30.320158   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:17:30.346169   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1011 21:17:30.370669   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 21:17:30.396095   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:17:30.424361   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:17:30.449179   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:17:30.473592   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:17:30.497140   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:17:30.520773   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:17:30.537475   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:17:30.553696   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:17:30.573515   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:17:30.591050   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:17:30.607456   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:17:30.623663   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:17:30.639999   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:17:30.645863   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:17:30.656839   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661661   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.661737   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:17:30.667927   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:17:30.678586   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:17:30.690465   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695106   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.695178   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:17:30.700843   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:17:30.711530   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:17:30.722262   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726883   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.726930   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:17:30.732484   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
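The lines above show each CA being installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is the hash-directory layout OpenSSL uses to find trusted certificates. A minimal, purely illustrative Go sketch of that hash-and-link step (paths are examples, this is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into certsDir as <hash>.0, mirroring the
// `openssl x509 -hash -noout` plus `ln -fs` sequence in the log above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; adjust to a certificate you actually have.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}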
	I1011 21:17:30.743130   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:17:30.747324   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:17:30.747378   29617 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I1011 21:17:30.747471   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
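The kubelet unit printed above is rendered from the node's config: the ExecStart override carries --hostname-override and --node-ip for the joining node (ha-610874-m02 / 192.168.39.11), and it ends up in the drop-in that is scp'd later in the log (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A hypothetical sketch of rendering such a unit with text/template; the field names and exact layout are assumptions for illustration, not minikube's template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the ExecStart override seen in
// the log; the real file minikube writes may differ in detail.
const kubeletUnit = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-610874-m02",
		"NodeIP":            "192.168.39.11",
	})
}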
	I1011 21:17:30.747503   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:17:30.747550   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:17:30.764827   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:17:30.764898   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
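kube-vip is deployed as a static pod manifest (the 1441-byte scp to /etc/kubernetes/manifests/kube-vip.yaml below) on each control-plane node; with cp_enable and lb_enable set it leader-elects on the plndr-cp-lock lease, advertises the VIP 192.168.39.254 over ARP on eth0, and load-balances port 8443 across the API servers. A small, hedged sketch for checking that the VIP actually answers TLS on 8443 (purely illustrative, not part of the test suite):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port come from the manifest above; this only probes that
	// something terminates TLS there, so certificate verification is skipped.
	d := &net.Dialer{Timeout: 5 * time.Second}
	conn, err := tls.DialWithDialer(d, "tcp", "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP answered, peer cert CN:", conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
}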
	I1011 21:17:30.764958   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.774946   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:17:30.775004   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:17:30.785084   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:17:30.785115   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785173   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:17:30.785210   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1011 21:17:30.785254   29617 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1011 21:17:30.789999   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:17:30.790028   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:17:31.801070   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.801149   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:17:31.806312   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:17:31.806341   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:17:31.977093   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:17:32.035477   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.035590   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:17:32.049208   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:17:32.049241   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
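Each binary transfer above follows the same pattern: stat the remote path, treat a non-zero exit ("No such file or directory") as missing, and only then scp the file from the local download cache. A minimal local-filesystem analogue of that check-then-copy step, given as an assumption-laden sketch rather than minikube's ssh_runner code:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp sequence in the log.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Illustrative stand-ins for the cache path and target path in the log.
	if err := copyIfMissing("cache/kubelet", "/tmp/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}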
	I1011 21:17:32.383282   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:17:32.393090   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1011 21:17:32.409524   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:17:32.426347   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:17:32.443202   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:17:32.447193   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
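The /etc/hosts update above is idempotent: grep first checks whether 192.168.39.254 already maps to control-plane.minikube.internal, and the bash one-liner then filters out any stale entry for that host before appending the new one. A hedged sketch of the same filter-and-append, operating on an in-memory copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for the given host name and
// appends a fresh "ip\thost" entry, like the one-liner in the log.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n")
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal"
	fmt.Println(upsertHostsEntry(before, "192.168.39.254", "control-plane.minikube.internal"))
}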
	I1011 21:17:32.459719   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:32.593682   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:32.611619   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:17:32.611941   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:17:32.611988   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:17:32.626650   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1011 21:17:32.627104   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:17:32.627665   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:17:32.627681   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:17:32.627997   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:17:32.628209   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:17:32.628355   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:17:32.628464   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:17:32.628490   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:17:32.631170   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631565   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:17:32.631594   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:17:32.631751   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:17:32.631931   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:17:32.632068   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:17:32.632206   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:17:32.785858   29617 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:32.785905   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I1011 21:17:54.047983   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token adld5m.tsti4kephgxnkkbf --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (21.262048482s)
	I1011 21:17:54.048020   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:17:54.524404   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m02 minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:17:54.662523   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:17:54.782630   29617 start.go:319] duration metric: took 22.154260063s to joinCluster
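Joining m02 as a second control-plane node is the two-step handshake visible above: `kubeadm token create --print-join-command --ttl=0` on the existing control plane returns a join command carrying a fresh bootstrap token and the CA cert hash, and that command is then run on the new node with --control-plane, --node-name and --apiserver-advertise-address appended. A hypothetical helper that assembles such a command line from its parts (the token and hash below are placeholders, never real values):

package main

import "fmt"

// joinCommand builds a control-plane join invocation from the pieces that
// `kubeadm token create --print-join-command` returns plus the per-node
// flags appended in the log. Purely illustrative.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
			"--control-plane --node-name=%s --apiserver-advertise-address=%s --apiserver-bind-port=%d",
		endpoint, token, caHash, nodeName, advertiseIP, port)
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<ca-hash>", // placeholders only
		"ha-610874-m02", "192.168.39.11", 8443))
}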
	I1011 21:17:54.782703   29617 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:17:54.782988   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:17:54.784979   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:17:54.786144   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:17:55.109738   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:17:55.128457   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:17:55.128804   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:17:55.128882   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
	I1011 21:17:55.129129   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m02" to be "Ready" ...
	I1011 21:17:55.129231   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.129241   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.129252   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.129258   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.140234   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:17:55.629803   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:55.629830   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:55.629841   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:55.629847   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:55.633275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.129516   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.129541   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.129552   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.129559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.132902   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:56.629511   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:56.629534   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:56.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:56.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:56.634698   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:17:57.129572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.129597   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.129605   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.129609   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.132668   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:57.133230   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:57.629639   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:57.629659   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:57.629667   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:57.629670   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:57.632880   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:58.129393   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.129417   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.129441   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.129446   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.132403   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:17:58.629999   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:58.630018   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:58.630026   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:58.630030   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:58.633746   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.130079   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.130096   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.130104   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.130108   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.133281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:17:59.133973   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:17:59.629323   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:17:59.629347   29617 round_trippers.go:469] Request Headers:
	I1011 21:17:59.629358   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:17:59.629364   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:17:59.632796   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.129728   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.129749   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.129758   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.129767   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.133151   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:00.629977   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:00.630003   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:00.630015   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:00.630021   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:00.633099   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.130138   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.130160   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.130171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.130182   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.133307   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:01.134143   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:01.630135   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:01.630158   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:01.630171   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:01.630177   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:01.634516   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:02.129957   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.129977   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.129985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.129990   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.209108   29617 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I1011 21:18:02.630223   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:02.630241   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:02.630249   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:02.630254   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:02.633360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:03.130145   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.130165   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.130172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.130176   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.134521   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:03.135482   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:03.630325   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:03.630348   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:03.630357   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:03.630363   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:03.633906   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.129848   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.129869   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.129880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.129885   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.133353   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:04.630352   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:04.630378   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:04.630391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:04.630395   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:04.633784   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:05.129622   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.129647   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.129658   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.129664   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.174718   29617 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1011 21:18:05.175206   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:05.629573   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:05.629601   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:05.629610   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:05.629614   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:05.633377   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.129366   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.129388   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.129396   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.129399   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.132592   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:06.630152   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:06.630174   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:06.630184   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:06.630190   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:06.633604   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.130251   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.130280   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.130292   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.130299   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.133640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.629546   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:07.629568   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:07.629578   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:07.629583   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:07.632932   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:07.633891   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:08.129786   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.129803   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.129811   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.129815   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.133290   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:08.629506   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:08.629533   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:08.629544   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:08.629548   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:08.633075   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.129541   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.129559   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.129567   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.129572   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.132640   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:09.629665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:09.629684   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:09.629692   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:09.629697   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:09.632858   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:10.129866   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.129885   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.129893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.129897   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.132615   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:10.133150   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:10.629443   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:10.629475   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:10.629489   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:10.629493   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:10.632970   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.130002   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.130024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.130032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.130035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.133677   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:11.629439   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:11.629465   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:11.629477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:11.629482   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:11.632816   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.130049   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.130071   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.130080   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.130083   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.133179   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:12.133716   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:12.630085   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:12.630110   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:12.630121   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:12.630127   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:12.633114   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:13.130226   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.130245   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.130253   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.130258   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.133707   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:13.629976   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:13.630005   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:13.630016   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:13.630022   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:13.633601   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.129823   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.129846   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.129857   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.129863   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.132927   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.630032   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:14.630053   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:14.630062   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:14.630070   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:14.633208   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:14.633750   29617 node_ready.go:53] node "ha-610874-m02" has status "Ready":"False"
	I1011 21:18:15.129885   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.129909   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.129919   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.129924   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.132958   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.630000   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.630024   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.630032   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.630035   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.632986   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.633633   29617 node_ready.go:49] node "ha-610874-m02" has status "Ready":"True"
	I1011 21:18:15.633647   29617 node_ready.go:38] duration metric: took 20.504503338s for node "ha-610874-m02" to be "Ready" ...
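The node_ready wait above polls GET /api/v1/nodes/ha-610874-m02 roughly every 500ms until the node's Ready condition flips to True (about 20.5s here, while the kubelet and networking finish coming up). A small client-go sketch of the same readiness check, assuming a kubeconfig at the default location; this is an illustration, not minikube's own verification code:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-610874-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the polling cadence in the log
	}
}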
	I1011 21:18:15.633655   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:18:15.633709   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:15.633718   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.633724   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.633728   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.637582   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:15.643886   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.643972   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:18:15.643983   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.643993   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.643999   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.646763   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.647514   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.647529   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.647536   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.647539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.649945   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.650586   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.650602   29617 pod_ready.go:82] duration metric: took 6.694777ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650623   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.650679   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:18:15.650688   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.650699   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.650707   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.652943   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.653673   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.653687   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.653696   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.653701   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.655886   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.656382   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.656397   29617 pod_ready.go:82] duration metric: took 5.765488ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656405   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.656451   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:18:15.656461   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.656471   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.656477   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.658729   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.659391   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:15.659409   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.659419   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.659426   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.661629   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.662114   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.662130   29617 pod_ready.go:82] duration metric: took 5.719352ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662137   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.662181   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:18:15.662190   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.662197   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.662201   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.664800   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:15.665273   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:15.665286   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.665294   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.665298   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.667272   29617 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1011 21:18:15.667736   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:15.667757   29617 pod_ready.go:82] duration metric: took 5.613486ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.667773   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:15.830074   29617 request.go:632] Waited for 162.243136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830160   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:18:15.830168   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:15.830178   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:15.830188   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:15.833590   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.030666   29617 request.go:632] Waited for 196.378996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030722   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.030728   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.030735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.030739   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.033962   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.034580   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.034599   29617 pod_ready.go:82] duration metric: took 366.81416ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
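The "Waited for ... due to client-side throttling" messages interleaved with these pod checks come from client-go's token-bucket rate limiter; the rest.Config dump earlier in the log shows QPS:0 and Burst:0, so the client falls back to the library defaults and bursts of node/pod GETs get spaced out. A hedged sketch of raising those limits on a rest.Config (the numbers are arbitrary examples, not recommended values):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg := &rest.Config{Host: "https://192.168.39.10:8443"}

	// Either raise QPS/Burst directly...
	cfg.QPS = 50
	cfg.Burst = 100

	// ...or install an explicit token-bucket limiter; when set, it takes
	// precedence over QPS/Burst. Both approaches reduce the client-side
	// throttling waits seen in the log.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)

	fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}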
	I1011 21:18:16.034608   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.230672   29617 request.go:632] Waited for 195.982779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230778   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:18:16.230790   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.230801   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.230810   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.234030   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.430609   29617 request.go:632] Waited for 195.69013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430701   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:16.430712   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.430735   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.433742   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.434219   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.434239   29617 pod_ready.go:82] duration metric: took 399.609699ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.434252   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.630260   29617 request.go:632] Waited for 195.941074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630337   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:18:16.630342   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.630350   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.630357   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.633657   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:16.830752   29617 request.go:632] Waited for 196.369395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830804   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:16.830811   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:16.830820   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:16.830827   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:16.833807   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:16.834437   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:16.834455   29617 pod_ready.go:82] duration metric: took 400.195609ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:16.834465   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.030516   29617 request.go:632] Waited for 195.993213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030589   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:18:17.030595   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.030607   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.030627   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.034122   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.230257   29617 request.go:632] Waited for 195.302255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230322   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.230329   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.230337   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.230342   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.233560   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.234217   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.234239   29617 pod_ready.go:82] duration metric: took 399.767293ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.234256   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.430433   29617 request.go:632] Waited for 196.107897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430509   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:18:17.430515   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.430526   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.430534   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.434262   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.630356   29617 request.go:632] Waited for 195.345057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630426   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:17.630431   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.630439   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.630444   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.633591   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:17.634036   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:17.634054   29617 pod_ready.go:82] duration metric: took 399.790817ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.634064   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:17.830520   29617 request.go:632] Waited for 196.385742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830591   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:18:17.830596   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:17.830603   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:17.830607   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:17.833974   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.030999   29617 request.go:632] Waited for 196.369359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031062   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.031068   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.031075   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.031079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.034522   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.035045   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.035060   29617 pod_ready.go:82] duration metric: took 400.990689ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.035069   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.230101   29617 request.go:632] Waited for 194.964535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230173   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:18:18.230179   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.230187   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.230191   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.233153   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:18:18.430174   29617 request.go:632] Waited for 196.304225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430252   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:18:18.430258   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.430265   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.430271   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.433684   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.434857   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.434876   29617 pod_ready.go:82] duration metric: took 399.800525ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.434886   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.630997   29617 request.go:632] Waited for 196.051862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631067   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:18:18.631072   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.631079   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.631090   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.634569   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.830555   29617 request.go:632] Waited for 195.378028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830645   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:18:18.830652   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.830659   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.830665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.834017   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:18.834881   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:18:18.834901   29617 pod_ready.go:82] duration metric: took 400.009355ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:18:18.834913   29617 pod_ready.go:39] duration metric: took 3.201246724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
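[Editor's note] The pod_ready.go checks above poll each system pod until its PodReady condition reports True, with a 6m0s budget per pod. A minimal client-go sketch of that kind of readiness poll (not minikube's implementation; the kubeconfig path is a placeholder and the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is the state the pod_ready.go messages above are waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-610874-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}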
	I1011 21:18:18.834925   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:18:18.834977   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:18:18.851851   29617 api_server.go:72] duration metric: took 24.069111498s to wait for apiserver process to appear ...
	I1011 21:18:18.851878   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:18:18.851897   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:18:18.856543   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:18:18.856610   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:18:18.856615   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:18.856622   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:18.856626   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:18.857613   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:18:18.857701   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:18:18.857721   29617 api_server.go:131] duration metric: took 5.836547ms to wait for apiserver health ...
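[Editor's note] The healthz probe above is a plain HTTPS GET against the apiserver that expects the body "ok", followed by a GET of /version to read the control-plane version. A stripped-down sketch of such a check; it skips TLS verification for brevity, which the real check does not do:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify keeps the sketch short; minikube authenticates
	// the apiserver with the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.10:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expects "200 ok"
}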
	I1011 21:18:18.857730   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:18:19.030066   29617 request.go:632] Waited for 172.254223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030130   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.030136   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.030143   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.030148   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.034696   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.039508   29617 system_pods.go:59] 17 kube-system pods found
	I1011 21:18:19.039540   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.039546   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.039551   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.039557   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.039561   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.039566   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.039570   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.039579   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.039584   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.039592   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.039597   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.039601   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.039606   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.039612   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.039615   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.039619   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.039622   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.039631   29617 system_pods.go:74] duration metric: took 181.896084ms to wait for pod list to return data ...
	I1011 21:18:19.039640   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:18:19.230981   29617 request.go:632] Waited for 191.269571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231051   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:18:19.231057   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.231064   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.231067   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.235209   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:18:19.235407   29617 default_sa.go:45] found service account: "default"
	I1011 21:18:19.235421   29617 default_sa.go:55] duration metric: took 195.775642ms for default service account to be created ...
	I1011 21:18:19.235428   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:18:19.430605   29617 request.go:632] Waited for 195.123077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430704   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:18:19.430710   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.430718   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.430723   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.435793   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:18:19.439894   29617 system_pods.go:86] 17 kube-system pods found
	I1011 21:18:19.439921   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:18:19.439929   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:18:19.439935   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:18:19.439942   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:18:19.439947   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:18:19.439953   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:18:19.439959   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:18:19.439965   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:18:19.439972   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:18:19.439980   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:18:19.439986   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:18:19.439995   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:18:19.440002   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:18:19.440010   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:18:19.440016   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:18:19.440020   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:18:19.440025   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:18:19.440033   29617 system_pods.go:126] duration metric: took 204.599583ms to wait for k8s-apps to be running ...
	I1011 21:18:19.440045   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:18:19.440094   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:19.455815   29617 system_svc.go:56] duration metric: took 15.763998ms WaitForService to wait for kubelet
	I1011 21:18:19.455841   29617 kubeadm.go:582] duration metric: took 24.673107672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:18:19.455860   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:18:19.630302   29617 request.go:632] Waited for 174.358774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630357   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:18:19.630364   29617 round_trippers.go:469] Request Headers:
	I1011 21:18:19.630372   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:18:19.630379   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:18:19.634356   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:18:19.635316   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635343   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635358   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:18:19.635363   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:18:19.635371   29617 node_conditions.go:105] duration metric: took 179.50548ms to run NodePressure ...
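[Editor's note] The NodePressure step reads each node's capacity (cpu and ephemeral-storage, as printed above) from the API. A short client-go sketch of listing nodes and printing those two capacities (the kubeconfig path is a placeholder; this is not minikube's node_conditions.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}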
	I1011 21:18:19.635384   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:18:19.635415   29617 start.go:255] writing updated cluster config ...
	I1011 21:18:19.637553   29617 out.go:201] 
	I1011 21:18:19.638933   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:19.639018   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.640415   29617 out.go:177] * Starting "ha-610874-m03" control-plane node in "ha-610874" cluster
	I1011 21:18:19.641511   29617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:18:19.641529   29617 cache.go:56] Caching tarball of preloaded images
	I1011 21:18:19.641627   29617 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:18:19.641638   29617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:18:19.641712   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:19.641856   29617 start.go:360] acquireMachinesLock for ha-610874-m03: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:18:19.641897   29617 start.go:364] duration metric: took 24.129µs to acquireMachinesLock for "ha-610874-m03"
	I1011 21:18:19.641912   29617 start.go:93] Provisioning new machine with config: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:19.642000   29617 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1011 21:18:19.643322   29617 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 21:18:19.643394   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:19.643424   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:19.657905   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I1011 21:18:19.658394   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:19.658868   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:19.658887   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:19.659186   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:19.659360   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:19.659497   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:19.659661   29617 start.go:159] libmachine.API.Create for "ha-610874" (driver="kvm2")
	I1011 21:18:19.659689   29617 client.go:168] LocalClient.Create starting
	I1011 21:18:19.659716   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 21:18:19.659744   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659756   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659802   29617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 21:18:19.659820   29617 main.go:141] libmachine: Decoding PEM data...
	I1011 21:18:19.659830   29617 main.go:141] libmachine: Parsing certificate...
	I1011 21:18:19.659844   29617 main.go:141] libmachine: Running pre-create checks...
	I1011 21:18:19.659851   29617 main.go:141] libmachine: (ha-610874-m03) Calling .PreCreateCheck
	I1011 21:18:19.659994   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:19.660351   29617 main.go:141] libmachine: Creating machine...
	I1011 21:18:19.660362   29617 main.go:141] libmachine: (ha-610874-m03) Calling .Create
	I1011 21:18:19.660504   29617 main.go:141] libmachine: (ha-610874-m03) Creating KVM machine...
	I1011 21:18:19.661678   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing default KVM network
	I1011 21:18:19.661785   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found existing private KVM network mk-ha-610874
	I1011 21:18:19.661907   29617 main.go:141] libmachine: (ha-610874-m03) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.661930   29617 main.go:141] libmachine: (ha-610874-m03) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 21:18:19.662023   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.661913   30793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.662086   29617 main.go:141] libmachine: (ha-610874-m03) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 21:18:19.893907   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.893764   30793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa...
	I1011 21:18:19.985249   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985139   30793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk...
	I1011 21:18:19.985285   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing magic tar header
	I1011 21:18:19.985300   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Writing SSH key tar header
	I1011 21:18:19.985311   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:19.985257   30793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 ...
	I1011 21:18:19.985329   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03
	I1011 21:18:19.985350   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03 (perms=drwx------)
	I1011 21:18:19.985373   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 21:18:19.985396   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:18:19.985411   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 21:18:19.985426   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 21:18:19.985434   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 21:18:19.985440   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 21:18:19.985456   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 21:18:19.985468   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home/jenkins
	I1011 21:18:19.985478   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 21:18:19.985499   29617 main.go:141] libmachine: (ha-610874-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 21:18:19.985509   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:19.985516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Checking permissions on dir: /home
	I1011 21:18:19.985526   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Skipping /home - not owner
	I1011 21:18:19.986460   29617 main.go:141] libmachine: (ha-610874-m03) define libvirt domain using xml: 
	I1011 21:18:19.986487   29617 main.go:141] libmachine: (ha-610874-m03) <domain type='kvm'>
	I1011 21:18:19.986497   29617 main.go:141] libmachine: (ha-610874-m03)   <name>ha-610874-m03</name>
	I1011 21:18:19.986505   29617 main.go:141] libmachine: (ha-610874-m03)   <memory unit='MiB'>2200</memory>
	I1011 21:18:19.986513   29617 main.go:141] libmachine: (ha-610874-m03)   <vcpu>2</vcpu>
	I1011 21:18:19.986528   29617 main.go:141] libmachine: (ha-610874-m03)   <features>
	I1011 21:18:19.986539   29617 main.go:141] libmachine: (ha-610874-m03)     <acpi/>
	I1011 21:18:19.986547   29617 main.go:141] libmachine: (ha-610874-m03)     <apic/>
	I1011 21:18:19.986559   29617 main.go:141] libmachine: (ha-610874-m03)     <pae/>
	I1011 21:18:19.986567   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.986578   29617 main.go:141] libmachine: (ha-610874-m03)   </features>
	I1011 21:18:19.986587   29617 main.go:141] libmachine: (ha-610874-m03)   <cpu mode='host-passthrough'>
	I1011 21:18:19.986598   29617 main.go:141] libmachine: (ha-610874-m03)   
	I1011 21:18:19.986605   29617 main.go:141] libmachine: (ha-610874-m03)   </cpu>
	I1011 21:18:19.986657   29617 main.go:141] libmachine: (ha-610874-m03)   <os>
	I1011 21:18:19.986683   29617 main.go:141] libmachine: (ha-610874-m03)     <type>hvm</type>
	I1011 21:18:19.986694   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='cdrom'/>
	I1011 21:18:19.986706   29617 main.go:141] libmachine: (ha-610874-m03)     <boot dev='hd'/>
	I1011 21:18:19.986714   29617 main.go:141] libmachine: (ha-610874-m03)     <bootmenu enable='no'/>
	I1011 21:18:19.986723   29617 main.go:141] libmachine: (ha-610874-m03)   </os>
	I1011 21:18:19.986733   29617 main.go:141] libmachine: (ha-610874-m03)   <devices>
	I1011 21:18:19.986743   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='cdrom'>
	I1011 21:18:19.986759   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/boot2docker.iso'/>
	I1011 21:18:19.986773   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hdc' bus='scsi'/>
	I1011 21:18:19.986784   29617 main.go:141] libmachine: (ha-610874-m03)       <readonly/>
	I1011 21:18:19.986793   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986804   29617 main.go:141] libmachine: (ha-610874-m03)     <disk type='file' device='disk'>
	I1011 21:18:19.986816   29617 main.go:141] libmachine: (ha-610874-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 21:18:19.986831   29617 main.go:141] libmachine: (ha-610874-m03)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/ha-610874-m03.rawdisk'/>
	I1011 21:18:19.986840   29617 main.go:141] libmachine: (ha-610874-m03)       <target dev='hda' bus='virtio'/>
	I1011 21:18:19.986871   29617 main.go:141] libmachine: (ha-610874-m03)     </disk>
	I1011 21:18:19.986898   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986911   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='mk-ha-610874'/>
	I1011 21:18:19.986922   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986933   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986941   29617 main.go:141] libmachine: (ha-610874-m03)     <interface type='network'>
	I1011 21:18:19.986948   29617 main.go:141] libmachine: (ha-610874-m03)       <source network='default'/>
	I1011 21:18:19.986962   29617 main.go:141] libmachine: (ha-610874-m03)       <model type='virtio'/>
	I1011 21:18:19.986972   29617 main.go:141] libmachine: (ha-610874-m03)     </interface>
	I1011 21:18:19.986987   29617 main.go:141] libmachine: (ha-610874-m03)     <serial type='pty'>
	I1011 21:18:19.986999   29617 main.go:141] libmachine: (ha-610874-m03)       <target port='0'/>
	I1011 21:18:19.987006   29617 main.go:141] libmachine: (ha-610874-m03)     </serial>
	I1011 21:18:19.987015   29617 main.go:141] libmachine: (ha-610874-m03)     <console type='pty'>
	I1011 21:18:19.987025   29617 main.go:141] libmachine: (ha-610874-m03)       <target type='serial' port='0'/>
	I1011 21:18:19.987033   29617 main.go:141] libmachine: (ha-610874-m03)     </console>
	I1011 21:18:19.987052   29617 main.go:141] libmachine: (ha-610874-m03)     <rng model='virtio'>
	I1011 21:18:19.987060   29617 main.go:141] libmachine: (ha-610874-m03)       <backend model='random'>/dev/random</backend>
	I1011 21:18:19.987068   29617 main.go:141] libmachine: (ha-610874-m03)     </rng>
	I1011 21:18:19.987076   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987087   29617 main.go:141] libmachine: (ha-610874-m03)     
	I1011 21:18:19.987095   29617 main.go:141] libmachine: (ha-610874-m03)   </devices>
	I1011 21:18:19.987107   29617 main.go:141] libmachine: (ha-610874-m03) </domain>
	I1011 21:18:19.987120   29617 main.go:141] libmachine: (ha-610874-m03) 
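[Editor's note] The kvm2 driver hands libvirt a domain definition like the XML logged above and then creates the domain. A trimmed sketch of defining and booting a domain, assuming the libvirt.org/go/libvirt Go bindings; the domain name and disk path below are placeholders, not values from this run, and this is not the driver's actual code:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>example-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/example-m03.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then boot it; this mirrors the
	// "define libvirt domain using xml" / "Creating domain..." steps above.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}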
	I1011 21:18:19.993869   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:ec:a1:8a in network default
	I1011 21:18:19.994634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:19.994661   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring networks are active...
	I1011 21:18:19.995468   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network default is active
	I1011 21:18:19.995798   29617 main.go:141] libmachine: (ha-610874-m03) Ensuring network mk-ha-610874 is active
	I1011 21:18:19.996173   29617 main.go:141] libmachine: (ha-610874-m03) Getting domain xml...
	I1011 21:18:19.996928   29617 main.go:141] libmachine: (ha-610874-m03) Creating domain...
	I1011 21:18:21.254226   29617 main.go:141] libmachine: (ha-610874-m03) Waiting to get IP...
	I1011 21:18:21.254939   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.255287   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.255333   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.255277   30793 retry.go:31] will retry after 299.921958ms: waiting for machine to come up
	I1011 21:18:21.557116   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.557606   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.557634   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.557554   30793 retry.go:31] will retry after 286.000289ms: waiting for machine to come up
	I1011 21:18:21.844948   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:21.845467   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:21.845490   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:21.845417   30793 retry.go:31] will retry after 387.119662ms: waiting for machine to come up
	I1011 21:18:22.233861   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.234347   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.234371   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.234316   30793 retry.go:31] will retry after 432.218769ms: waiting for machine to come up
	I1011 21:18:22.667570   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:22.668013   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:22.668044   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:22.667960   30793 retry.go:31] will retry after 681.692732ms: waiting for machine to come up
	I1011 21:18:23.350671   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:23.351087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:23.351114   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:23.351059   30793 retry.go:31] will retry after 838.189989ms: waiting for machine to come up
	I1011 21:18:24.191008   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:24.191479   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:24.191510   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:24.191434   30793 retry.go:31] will retry after 815.751815ms: waiting for machine to come up
	I1011 21:18:25.008738   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:25.009063   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:25.009087   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:25.009033   30793 retry.go:31] will retry after 1.238801147s: waiting for machine to come up
	I1011 21:18:26.249732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:26.250130   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:26.250160   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:26.250077   30793 retry.go:31] will retry after 1.384996284s: waiting for machine to come up
	I1011 21:18:27.636107   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:27.636581   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:27.636616   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:27.636560   30793 retry.go:31] will retry after 2.228451179s: waiting for machine to come up
	I1011 21:18:29.866214   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:29.866564   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:29.866592   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:29.866517   30793 retry.go:31] will retry after 2.670642081s: waiting for machine to come up
	I1011 21:18:32.539631   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:32.539928   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:32.539955   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:32.539912   30793 retry.go:31] will retry after 2.348031686s: waiting for machine to come up
	I1011 21:18:34.889816   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:34.890238   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:34.890284   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:34.890163   30793 retry.go:31] will retry after 4.066011924s: waiting for machine to come up
	I1011 21:18:38.960327   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:38.960729   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find current IP address of domain ha-610874-m03 in network mk-ha-610874
	I1011 21:18:38.960754   29617 main.go:141] libmachine: (ha-610874-m03) DBG | I1011 21:18:38.960678   30793 retry.go:31] will retry after 5.543915191s: waiting for machine to come up
	I1011 21:18:44.509752   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510179   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has current primary IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.510202   29617 main.go:141] libmachine: (ha-610874-m03) Found IP for machine: 192.168.39.222
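[Editor's note] The retry.go lines above poll for the new VM's DHCP lease, sleeping a growing, jittered delay between attempts until the IP appears. A bare-bones standard-library version of that retry pattern (illustration only, not minikube's retry helper; the lease lookup is faked):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxAttempts is reached, sleeping a
// jittered, growing delay between attempts, similar to the "will retry after
// ..." messages above.
func retry(maxAttempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("gave up waiting")
}

func main() {
	start := time.Now()
	err := retry(15, 300*time.Millisecond, func() error {
		// Stand-in for "look up the domain's IP in the DHCP leases".
		if time.Since(start) < 2*time.Second {
			return errors.New("no lease yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}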
	I1011 21:18:44.510223   29617 main.go:141] libmachine: (ha-610874-m03) Reserving static IP address...
	I1011 21:18:44.510657   29617 main.go:141] libmachine: (ha-610874-m03) DBG | unable to find host DHCP lease matching {name: "ha-610874-m03", mac: "52:54:00:54:11:ff", ip: "192.168.39.222"} in network mk-ha-610874
	I1011 21:18:44.581123   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Getting to WaitForSSH function...
	I1011 21:18:44.581152   29617 main.go:141] libmachine: (ha-610874-m03) Reserved static IP address: 192.168.39.222
	I1011 21:18:44.581189   29617 main.go:141] libmachine: (ha-610874-m03) Waiting for SSH to be available...
	I1011 21:18:44.584495   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585006   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.585034   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.585216   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH client type: external
	I1011 21:18:44.585245   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa (-rw-------)
	I1011 21:18:44.585269   29617 main.go:141] libmachine: (ha-610874-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 21:18:44.585288   29617 main.go:141] libmachine: (ha-610874-m03) DBG | About to run SSH command:
	I1011 21:18:44.585303   29617 main.go:141] libmachine: (ha-610874-m03) DBG | exit 0
	I1011 21:18:44.714704   29617 main.go:141] libmachine: (ha-610874-m03) DBG | SSH cmd err, output: <nil>: 
	I1011 21:18:44.714970   29617 main.go:141] libmachine: (ha-610874-m03) KVM machine creation complete!
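[Editor's note] The WaitForSSH step above simply runs "exit 0" over SSH until it succeeds. A minimal sketch of the same probe using golang.org/x/crypto/ssh (the key path is a placeholder; the address and user mirror the log; not libmachine's implementation):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path stands in for the machine's id_rsa shown in the log.
	key, err := os.ReadFile("/path/to/machines/example-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.222:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The reachability probe is literally "exit 0": success means SSH is up
	// and the key is accepted.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}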
	I1011 21:18:44.715289   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:44.715822   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.715996   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:44.716157   29617 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 21:18:44.716172   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetState
	I1011 21:18:44.717356   29617 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 21:18:44.717371   29617 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 21:18:44.717376   29617 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 21:18:44.717382   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.719703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.719994   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.720030   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.720182   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.720357   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720507   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.720609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.720910   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.721104   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.721116   29617 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 21:18:44.833939   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:44.833957   29617 main.go:141] libmachine: Detecting the provisioner...
	I1011 21:18:44.833964   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.836658   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837043   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.837069   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.837281   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.837454   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837581   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.837720   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.837855   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.838048   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.838063   29617 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 21:18:44.951348   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 21:18:44.951417   29617 main.go:141] libmachine: found compatible host: buildroot
	I1011 21:18:44.951426   29617 main.go:141] libmachine: Provisioning with buildroot...
	I1011 21:18:44.951433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951662   29617 buildroot.go:166] provisioning hostname "ha-610874-m03"
	I1011 21:18:44.951688   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:44.951865   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:44.954732   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955115   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:44.955139   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:44.955310   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:44.955477   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:44.955769   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:44.955914   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:44.956070   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:44.956081   29617 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874-m03 && echo "ha-610874-m03" | sudo tee /etc/hostname
	I1011 21:18:45.085832   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874-m03
	
	I1011 21:18:45.085866   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.088705   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089140   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.089165   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.089355   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.089596   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089767   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.089921   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.090058   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.090210   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.090224   29617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:18:45.213456   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:18:45.213485   29617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:18:45.213503   29617 buildroot.go:174] setting up certificates
	I1011 21:18:45.213511   29617 provision.go:84] configureAuth start
	I1011 21:18:45.213520   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetMachineName
	I1011 21:18:45.213850   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.216516   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.216909   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.216945   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.217058   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.219374   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219692   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.219725   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.219870   29617 provision.go:143] copyHostCerts
	I1011 21:18:45.219895   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.219927   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:18:45.219936   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:18:45.220002   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:18:45.220073   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220091   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:18:45.220098   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:18:45.220120   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:18:45.220162   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220179   29617 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:18:45.220186   29617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:18:45.220212   29617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:18:45.220261   29617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874-m03 san=[127.0.0.1 192.168.39.222 ha-610874-m03 localhost minikube]
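
The server certificate generated above is signed by the profile CA and carries the node name and its IPs as SANs. A minimal Go sketch of building such a certificate follows; it is not minikube's implementation, and for brevity it self-signs instead of signing with the CA key (names and IPs are copied from the log line above):

	// Sketch only (assumption: not minikube's code): build and self-sign an x509
	// server certificate whose SANs mirror the san=[...] list logged above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-610874-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-610874-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
		}
		// minikube signs with its CA key (ca-key.pem); self-signing keeps the sketch short.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
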
	I1011 21:18:45.381567   29617 provision.go:177] copyRemoteCerts
	I1011 21:18:45.381648   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:18:45.381676   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.384744   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385058   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.385090   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.385241   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.385433   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.385594   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.385733   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.474156   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:18:45.474223   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:18:45.499839   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:18:45.499913   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 21:18:45.523935   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:18:45.524000   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:18:45.546732   29617 provision.go:87] duration metric: took 333.208457ms to configureAuth
	I1011 21:18:45.546761   29617 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:18:45.546986   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:45.547077   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.549423   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549746   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.549774   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.549963   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.550145   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550309   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.550436   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.550559   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.550750   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.550765   29617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:18:45.793129   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:18:45.793158   29617 main.go:141] libmachine: Checking connection to Docker...
	I1011 21:18:45.793166   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetURL
	I1011 21:18:45.794426   29617 main.go:141] libmachine: (ha-610874-m03) DBG | Using libvirt version 6000000
	I1011 21:18:45.796703   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797072   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.797104   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.797300   29617 main.go:141] libmachine: Docker is up and running!
	I1011 21:18:45.797313   29617 main.go:141] libmachine: Reticulating splines...
	I1011 21:18:45.797320   29617 client.go:171] duration metric: took 26.137622442s to LocalClient.Create
	I1011 21:18:45.797348   29617 start.go:167] duration metric: took 26.137680612s to libmachine.API.Create "ha-610874"
	I1011 21:18:45.797358   29617 start.go:293] postStartSetup for "ha-610874-m03" (driver="kvm2")
	I1011 21:18:45.797373   29617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:18:45.797391   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:45.797597   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:18:45.797632   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.799512   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799830   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.799859   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.799989   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.800143   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.800296   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.800459   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:45.889596   29617 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:18:45.893814   29617 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:18:45.893840   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:18:45.893920   29617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:18:45.893992   29617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:18:45.894000   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:18:45.894078   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:18:45.903909   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:45.928066   29617 start.go:296] duration metric: took 130.695494ms for postStartSetup
	I1011 21:18:45.928125   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetConfigRaw
	I1011 21:18:45.928694   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:45.931370   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.931736   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.931757   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.932008   29617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:18:45.932227   29617 start.go:128] duration metric: took 26.290217466s to createHost
	I1011 21:18:45.932255   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:45.934599   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.934957   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:45.934980   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:45.935141   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:45.935302   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935450   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:45.935609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:45.935755   29617 main.go:141] libmachine: Using SSH client type: native
	I1011 21:18:45.935906   29617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1011 21:18:45.935915   29617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:18:46.051363   29617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728681526.030608830
	
	I1011 21:18:46.051382   29617 fix.go:216] guest clock: 1728681526.030608830
	I1011 21:18:46.051389   29617 fix.go:229] Guest: 2024-10-11 21:18:46.03060883 +0000 UTC Remote: 2024-10-11 21:18:45.932240932 +0000 UTC m=+149.654084325 (delta=98.367898ms)
	I1011 21:18:46.051403   29617 fix.go:200] guest clock delta is within tolerance: 98.367898ms
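
The guest-clock check above runs `date +%s.%N` over SSH and compares the result to the host clock. A minimal sketch of the same comparison, run locally instead of over SSH and with a hypothetical 2-second tolerance, looks like this:

	// Sketch only: the same clock comparison run locally; the 2s tolerance is a
	// hypothetical stand-in for minikube's threshold.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		out, err := exec.Command("date", "+%s.%N").Output() // over SSH in the real flow
		if err != nil {
			panic(err)
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < tolerance)
	}
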
	I1011 21:18:46.051408   29617 start.go:83] releasing machines lock for "ha-610874-m03", held for 26.409503393s
	I1011 21:18:46.051425   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.051638   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:46.054103   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.054465   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.054484   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.056759   29617 out.go:177] * Found network options:
	I1011 21:18:46.058108   29617 out.go:177]   - NO_PROXY=192.168.39.10,192.168.39.11
	W1011 21:18:46.059377   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.059397   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.059412   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.059861   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060012   29617 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:18:46.060103   29617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:18:46.060140   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	W1011 21:18:46.060197   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	W1011 21:18:46.060218   29617 proxy.go:119] fail to check proxy env: Error ip not in block
	I1011 21:18:46.060273   29617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:18:46.060291   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:18:46.062781   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063134   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063156   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063177   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063332   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063533   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.063672   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.063695   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:46.063722   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:46.063809   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:18:46.063917   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.063937   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:18:46.064070   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:18:46.064193   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:18:46.315238   29617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:18:46.321537   29617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:18:46.321622   29617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:18:46.338777   29617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 21:18:46.338801   29617 start.go:495] detecting cgroup driver to use...
	I1011 21:18:46.338861   29617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:18:46.354279   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:18:46.367905   29617 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:18:46.367951   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:18:46.382395   29617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:18:46.395784   29617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:18:46.527698   29617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:18:46.689393   29617 docker.go:233] disabling docker service ...
	I1011 21:18:46.689462   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:18:46.704203   29617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:18:46.717422   29617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:18:46.835539   29617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:18:46.954100   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:18:46.969007   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:18:46.988391   29617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:18:46.988466   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:46.998736   29617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:18:46.998798   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.011000   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.020896   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.032139   29617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:18:47.042674   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.053148   29617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.070001   29617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:18:47.079898   29617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:18:47.089404   29617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 21:18:47.089464   29617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 21:18:47.101955   29617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:18:47.111372   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:47.225475   29617 ssh_runner.go:195] Run: sudo systemctl restart crio
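
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, sysctls) and then restarts CRI-O. A rough Go equivalent of the two main key rewrites, assuming the same file path and run as root, could look like this; a `systemctl restart crio` is still needed afterwards, as in the log:

	// Sketch only (assumption: same effect as the sed edits logged above).
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
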
	I1011 21:18:47.314226   29617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:18:47.314298   29617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:18:47.318974   29617 start.go:563] Will wait 60s for crictl version
	I1011 21:18:47.319034   29617 ssh_runner.go:195] Run: which crictl
	I1011 21:18:47.322683   29617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:18:47.363256   29617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:18:47.363346   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.390105   29617 ssh_runner.go:195] Run: crio --version
	I1011 21:18:47.420312   29617 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:18:47.421976   29617 out.go:177]   - env NO_PROXY=192.168.39.10
	I1011 21:18:47.423450   29617 out.go:177]   - env NO_PROXY=192.168.39.10,192.168.39.11
	I1011 21:18:47.424609   29617 main.go:141] libmachine: (ha-610874-m03) Calling .GetIP
	I1011 21:18:47.427015   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427408   29617 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:18:47.427435   29617 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:18:47.427580   29617 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:18:47.432290   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:47.445118   29617 mustload.go:65] Loading cluster: ha-610874
	I1011 21:18:47.445341   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:18:47.445588   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.445623   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.460772   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1011 21:18:47.461253   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.461758   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.461778   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.462071   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.462258   29617 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:18:47.463800   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:47.464063   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:47.464094   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:47.478835   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1011 21:18:47.479190   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:47.479632   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:47.479653   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:47.479922   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:47.480090   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:47.480267   29617 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.222
	I1011 21:18:47.480276   29617 certs.go:194] generating shared ca certs ...
	I1011 21:18:47.480289   29617 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.480440   29617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:18:47.480492   29617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:18:47.480504   29617 certs.go:256] generating profile certs ...
	I1011 21:18:47.480599   29617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:18:47.480632   29617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda
	I1011 21:18:47.480651   29617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:18:47.766344   29617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda ...
	I1011 21:18:47.766372   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda: {Name:mk781938e611c805d4d3614e2a3753b43a334879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766558   29617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda ...
	I1011 21:18:47.766576   29617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda: {Name:mk730a6176bc0314778375ee5435bf733e13e8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:18:47.766701   29617 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:18:47.766854   29617 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.559e7cda -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:18:47.767020   29617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:18:47.767039   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:18:47.767069   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:18:47.767088   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:18:47.767105   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:18:47.767122   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:18:47.767138   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:18:47.767155   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:18:47.790727   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:18:47.790840   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:18:47.790890   29617 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:18:47.790900   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:18:47.790934   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:18:47.790968   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:18:47.791002   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:18:47.791046   29617 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:18:47.791074   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:18:47.791090   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:47.791103   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:18:47.791139   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:47.794048   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794490   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:47.794521   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:47.794666   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:47.794865   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:47.795021   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:47.795166   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:47.874924   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1011 21:18:47.879896   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1011 21:18:47.890508   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1011 21:18:47.894884   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1011 21:18:47.906444   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1011 21:18:47.911071   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1011 21:18:47.924640   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1011 21:18:47.929130   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1011 21:18:47.939543   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1011 21:18:47.943420   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1011 21:18:47.952418   29617 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1011 21:18:47.956156   29617 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1011 21:18:47.965542   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:18:47.990672   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:18:48.018655   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:18:48.046638   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:18:48.075087   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1011 21:18:48.099261   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1011 21:18:48.125316   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:18:48.150810   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:18:48.176240   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:18:48.202437   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:18:48.228304   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:18:48.250733   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1011 21:18:48.267330   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1011 21:18:48.284282   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1011 21:18:48.300414   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1011 21:18:48.317312   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1011 21:18:48.334266   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1011 21:18:48.350540   29617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1011 21:18:48.366454   29617 ssh_runner.go:195] Run: openssl version
	I1011 21:18:48.371903   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:18:48.382259   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386521   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.386558   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:18:48.392096   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:18:48.402476   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:18:48.414951   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420157   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.420212   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:18:48.426147   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:18:48.437228   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:18:48.447706   29617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452447   29617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.452490   29617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:18:48.457944   29617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:18:48.469558   29617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:18:48.473684   29617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:18:48.473727   29617 kubeadm.go:934] updating node {m03 192.168.39.222 8443 v1.31.1 crio true true} ...
	I1011 21:18:48.473800   29617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:18:48.473821   29617 kube-vip.go:115] generating kube-vip config ...
	I1011 21:18:48.473848   29617 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:18:48.489435   29617 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:18:48.489512   29617 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1011 21:18:48.489571   29617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.499111   29617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1011 21:18:48.499166   29617 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1011 21:18:48.509157   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1011 21:18:48.509200   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509211   29617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1011 21:18:48.509233   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509250   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1011 21:18:48.509288   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1011 21:18:48.509215   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:18:48.517849   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1011 21:18:48.517877   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1011 21:18:48.530466   29617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.530534   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1011 21:18:48.530551   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1011 21:18:48.530575   29617 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1011 21:18:48.584347   29617 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1011 21:18:48.584388   29617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
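
Because the new node has no cached Kubernetes binaries, kubelet, kubeadm and kubectl are fetched from dl.k8s.io and checked against the matching .sha256 file (see the "Not caching binary" lines above). A small download-and-verify sketch, not minikube's code, is:

	// Sketch only: fetch a release binary and verify it against the published
	// .sha256 file, mirroring the URLs logged above.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const url = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
		bin, err := fetch(url)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(url + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != strings.Fields(string(sum))[0] {
			panic("kubelet checksum mismatch")
		}
		if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubelet downloaded and verified")
	}
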
	I1011 21:18:49.359545   29617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1011 21:18:49.369067   29617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1011 21:18:49.386375   29617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:18:49.402697   29617 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:18:49.419546   29617 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:18:49.424269   29617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:18:49.437035   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:18:49.561710   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:18:49.579907   29617 host.go:66] Checking if "ha-610874" exists ...
	I1011 21:18:49.580262   29617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:18:49.580306   29617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:18:49.596329   29617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1011 21:18:49.596782   29617 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:18:49.597244   29617 main.go:141] libmachine: Using API Version  1
	I1011 21:18:49.597267   29617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:18:49.597574   29617 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:18:49.597761   29617 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:18:49.597902   29617 start.go:317] joinCluster: &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:18:49.598045   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1011 21:18:49.598061   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:18:49.601098   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601584   29617 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:18:49.601613   29617 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:18:49.601735   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:18:49.601902   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:18:49.602044   29617 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:18:49.602182   29617 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:18:49.765636   29617 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:18:49.765692   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I1011 21:19:12.027662   29617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq2brj.in6y1t565nh7eze9 --discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-610874-m03 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (22.261919257s)
	I1011 21:19:12.027723   29617 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1011 21:19:12.601287   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-610874-m03 minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=ha-610874 minikube.k8s.io/primary=false
	I1011 21:19:12.730357   29617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-610874-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1011 21:19:12.852046   29617 start.go:319] duration metric: took 23.254138834s to joinCluster
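
	The two kubectl invocations logged just above apply minikube's node labels and clear the control-plane NoSchedule taint on the freshly joined m03 node. As a rough illustration only (minikube itself shells out to kubectl, as the log shows), the same effect can be sketched with client-go; the kubeconfig path is a placeholder and only the primary=false label from the log is applied:

	// Hypothetical client-go equivalent of the two kubectl calls above; not minikube's code path.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx := context.Background()
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-610874-m03", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
		if node.Labels == nil {
			node.Labels = map[string]string{}
		}
		node.Labels["minikube.k8s.io/primary"] = "false"

		// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`:
		// keep every taint except the control-plane NoSchedule one.
		var kept []corev1.Taint
		for _, t := range node.Spec.Taints {
			if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
				continue
			}
			kept = append(kept, t)
		}
		node.Spec.Taints = kept

		// Simplified: a plain Update instead of kubectl's patch-based flow.
		if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		fmt.Println("node labeled and untainted")
	}
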
	I1011 21:19:12.852173   29617 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 21:19:12.852553   29617 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:19:12.853928   29617 out.go:177] * Verifying Kubernetes components...
	I1011 21:19:12.855524   29617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:19:13.141318   29617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:19:13.175499   29617 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:19:13.175739   29617 kapi.go:59] client config for ha-610874: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1011 21:19:13.175813   29617 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.10:8443
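
	The kapi.go and kubeadm.go lines above build a client from the runner's kubeconfig and then swap the stale virtual-IP host (192.168.39.254) for the reachable control-plane address (192.168.39.10). A minimal sketch of that pattern, assuming a placeholder kubeconfig path rather than the harness's own loader:

	// Load a kubeconfig, override a stale host, and build a clientset against the new address.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}

		// The kubeconfig still points at the HA virtual IP; override it with a
		// control-plane node address, as the W-level log line above describes.
		cfg.Host = "https://192.168.39.10:8443"

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("clientset built against", cfg.Host)
	}
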
	I1011 21:19:13.176040   29617 node_ready.go:35] waiting up to 6m0s for node "ha-610874-m03" to be "Ready" ...
	I1011 21:19:13.176203   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.176216   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.176230   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.176236   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.180062   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:13.676530   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:13.676550   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:13.676559   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:13.676563   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:13.680629   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.176763   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.176790   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.176802   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.176813   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.181595   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:14.676942   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:14.676962   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:14.676971   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:14.676974   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:14.680092   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.177198   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.177232   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.177243   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.177251   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.181013   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:15.181507   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:15.676949   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:15.676975   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:15.676985   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:15.676991   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:15.680404   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.176381   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.176401   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.176411   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.176416   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.179611   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:16.676230   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:16.676253   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:16.676264   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:16.676269   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:16.679007   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.176965   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.176991   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.177003   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.177010   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.179578   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:17.677212   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:17.677239   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:17.677250   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:17.677257   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:17.680848   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:17.681529   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:18.176617   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.176642   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.176652   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.176657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.180501   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:18.676324   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:18.676344   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:18.676352   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:18.676356   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:18.680172   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:19.176785   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.176805   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.176813   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.176817   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.180917   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:19.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:19.676229   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:19.676239   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:19.676247   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:19.679537   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:20.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.176578   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.176586   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.176590   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.180852   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:20.181655   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:20.676981   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:20.677001   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:20.677010   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:20.677013   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:20.680773   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.176665   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.176687   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.176695   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.176698   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.180326   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:21.677105   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:21.677131   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:21.677143   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:21.677150   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:21.680523   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:22.176275   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.176296   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.176305   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.176311   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.180665   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:22.181892   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:22.677209   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:22.677234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:22.677254   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:22.677260   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:22.680867   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.177040   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.177059   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.177067   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.177072   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.180354   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:23.676494   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:23.676523   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:23.676533   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:23.676539   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:23.679890   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.177143   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.177165   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.177172   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.177178   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.181118   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:24.182010   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:24.677149   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:24.677167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:24.677176   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:24.677179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:24.681310   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.176839   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.176861   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.176869   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.176875   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.181361   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:25.676206   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:25.676226   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:25.676235   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:25.676238   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:25.679734   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.176896   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.176927   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.176938   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.176942   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.180665   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.676529   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:26.676556   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:26.676567   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:26.676574   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:26.679852   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:26.680538   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:27.176980   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.177000   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.177008   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.177011   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.180641   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:27.676837   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:27.676865   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:27.676876   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:27.676883   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:27.680097   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.177112   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.177134   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.177145   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.177152   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.180461   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.676318   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:28.676339   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:28.676347   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:28.676351   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:28.680275   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:28.680843   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:29.176557   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.176576   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.176584   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.176589   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.180006   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:29.676572   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:29.676591   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:29.676601   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:29.676608   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:29.679885   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.176623   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.176647   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.176655   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.176660   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.180360   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:30.676414   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:30.676442   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:30.676454   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:30.676462   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:30.679795   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.176596   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.176622   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.176632   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.176638   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.180174   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:31.180775   29617 node_ready.go:53] node "ha-610874-m03" has status "Ready":"False"
	I1011 21:19:31.676625   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:31.676645   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:31.676653   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:31.676657   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:31.679755   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.176832   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.176853   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.176861   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.176866   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.180709   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:32.676943   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:32.676966   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:32.676975   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:32.676979   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:32.680453   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.176289   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.176309   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.176317   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.176323   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179239   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:33.179746   29617 node_ready.go:49] node "ha-610874-m03" has status "Ready":"True"
	I1011 21:19:33.179763   29617 node_ready.go:38] duration metric: took 20.003708199s for node "ha-610874-m03" to be "Ready" ...
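
	The long run of GET /api/v1/nodes/ha-610874-m03 requests above is node_ready.go polling roughly every 500ms until the node's Ready condition flips to True (about 20s in this run). A stripped-down sketch of that wait, assuming a placeholder kubeconfig path and the node name taken from the log:

	// Poll a node until its NodeReady condition reports True or a 6m timeout elapses.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-610874-m03", metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for node to be Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
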
	I1011 21:19:33.179771   29617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:19:33.179838   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:33.179846   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.179852   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.179856   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.189958   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.199406   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.199502   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bhkxl
	I1011 21:19:33.199514   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.199523   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.199531   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.209887   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.210687   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.210702   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.210713   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.210717   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.217280   29617 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1011 21:19:33.217765   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.217784   29617 pod_ready.go:82] duration metric: took 18.353705ms for pod "coredns-7c65d6cfc9-bhkxl" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217795   29617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.217867   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xdhdb
	I1011 21:19:33.217877   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.217887   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.217892   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.223080   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:33.223812   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.223824   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.223831   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.223835   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.230872   29617 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1011 21:19:33.231311   29617 pod_ready.go:93] pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.231329   29617 pod_ready.go:82] duration metric: took 13.526998ms for pod "coredns-7c65d6cfc9-xdhdb" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231340   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.231407   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874
	I1011 21:19:33.231416   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.231425   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.231433   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.241511   29617 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1011 21:19:33.242134   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.242152   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.242161   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.242167   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.246996   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.247556   29617 pod_ready.go:93] pod "etcd-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.247579   29617 pod_ready.go:82] duration metric: took 16.22432ms for pod "etcd-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247588   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.247649   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m02
	I1011 21:19:33.247658   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.247665   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.247671   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.251040   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.251793   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:33.251812   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.251824   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.251833   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.256535   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:33.256972   29617 pod_ready.go:93] pod "etcd-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.256988   29617 pod_ready.go:82] duration metric: took 9.394627ms for pod "etcd-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.256997   29617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.377135   29617 request.go:632] Waited for 120.080186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377222   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/etcd-ha-610874-m03
	I1011 21:19:33.377234   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.377244   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.377255   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.380444   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.576460   29617 request.go:632] Waited for 195.298391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576523   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:33.576531   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.576540   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.576546   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.579942   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.580389   29617 pod_ready.go:93] pod "etcd-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.580410   29617 pod_ready.go:82] duration metric: took 323.407782ms for pod "etcd-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
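
	The request.go:632 "Waited ... due to client-side throttling" messages come from client-go's default token-bucket limiter, which falls back to roughly 5 QPS with a burst of 10 when rest.Config leaves QPS and Burst at zero (as in the client config dump earlier in this log). A hedged sketch of raising those limits so the waits disappear; the values chosen here are arbitrary, not minikube's:

	// Raise the client-side rate limits before building the clientset.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}

		// With QPS/Burst unset, client-go throttles at ~5 QPS / burst 10 and logs the waits.
		cfg.QPS = 50
		cfg.Burst = 100

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("clientset created without the default throttle")
	}
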
	I1011 21:19:33.580426   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.776719   29617 request.go:632] Waited for 196.227093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776796   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874
	I1011 21:19:33.776801   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.776812   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.776819   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.780183   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.977331   29617 request.go:632] Waited for 196.373167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977390   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:33.977397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:33.977408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:33.977414   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:33.980667   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:33.981324   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:33.981341   29617 pod_ready.go:82] duration metric: took 400.908426ms for pod "kube-apiserver-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:33.981356   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.176801   29617 request.go:632] Waited for 195.389419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176872   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m02
	I1011 21:19:34.176878   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.176886   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.176893   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.180626   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.376945   29617 request.go:632] Waited for 195.362412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377024   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:34.377032   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.377039   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.377045   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.380705   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.381593   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.381610   29617 pod_ready.go:82] duration metric: took 400.248016ms for pod "kube-apiserver-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.381621   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.576685   29617 request.go:632] Waited for 195.00587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576774   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-610874-m03
	I1011 21:19:34.576785   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.576796   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.576812   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.580220   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:34.776845   29617 request.go:632] Waited for 195.742935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776934   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:34.776946   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.776957   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.776965   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.781975   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:34.782910   29617 pod_ready.go:93] pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:34.782934   29617 pod_ready.go:82] duration metric: took 401.305343ms for pod "kube-apiserver-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.782947   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:34.976878   29617 request.go:632] Waited for 193.849735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976930   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874
	I1011 21:19:34.976935   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:34.976942   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:34.976951   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:34.980959   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.176307   29617 request.go:632] Waited for 194.592291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176377   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:35.176382   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.176391   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.176396   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.180046   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.180744   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.180763   29617 pod_ready.go:82] duration metric: took 397.808243ms for pod "kube-controller-manager-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.180772   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.376823   29617 request.go:632] Waited for 195.970884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376892   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m02
	I1011 21:19:35.376904   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.376914   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.376920   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.380896   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.577025   29617 request.go:632] Waited for 195.339459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577098   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:35.577106   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.577113   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.577121   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.580479   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.581020   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.581044   29617 pod_ready.go:82] duration metric: took 400.264515ms for pod "kube-controller-manager-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.581060   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.777008   29617 request.go:632] Waited for 195.878722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777069   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-610874-m03
	I1011 21:19:35.777082   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.777104   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.777112   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.780597   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.976851   29617 request.go:632] Waited for 195.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976920   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:35.976925   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:35.976934   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:35.976956   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:35.980563   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:35.981007   29617 pod_ready.go:93] pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:35.981026   29617 pod_ready.go:82] duration metric: took 399.955573ms for pod "kube-controller-manager-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:35.981036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.177077   29617 request.go:632] Waited for 195.967969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177157   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bj7p
	I1011 21:19:36.177162   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.177169   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.177174   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.181463   29617 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1011 21:19:36.376692   29617 request.go:632] Waited for 194.268817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376745   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:36.376750   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.376757   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.376762   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.379384   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.379856   29617 pod_ready.go:93] pod "kube-proxy-4bj7p" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.379878   29617 pod_ready.go:82] duration metric: took 398.835564ms for pod "kube-proxy-4bj7p" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.379892   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.577313   29617 request.go:632] Waited for 197.342873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577431   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tqhn
	I1011 21:19:36.577448   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.577456   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.577460   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.580412   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:36.776616   29617 request.go:632] Waited for 195.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776706   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:36.776717   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.776728   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.776737   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.779960   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:36.780383   29617 pod_ready.go:93] pod "kube-proxy-4tqhn" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:36.780400   29617 pod_ready.go:82] duration metric: took 400.499984ms for pod "kube-proxy-4tqhn" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.780412   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:36.976358   29617 request.go:632] Waited for 195.870601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976432   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cwzw4
	I1011 21:19:36.976449   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:36.976465   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:36.976472   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:36.979995   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.177111   29617 request.go:632] Waited for 196.357808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177162   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:37.177167   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.177174   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.177179   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.180267   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.181011   29617 pod_ready.go:93] pod "kube-proxy-cwzw4" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.181027   29617 pod_ready.go:82] duration metric: took 400.605186ms for pod "kube-proxy-cwzw4" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.181036   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.377210   29617 request.go:632] Waited for 196.081343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377264   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874
	I1011 21:19:37.377271   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.377281   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.377290   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.380963   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.577326   29617 request.go:632] Waited for 195.76133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577389   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874
	I1011 21:19:37.577397   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.577404   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.577408   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.580712   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.581178   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.581195   29617 pod_ready.go:82] duration metric: took 400.154079ms for pod "kube-scheduler-ha-610874" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.581207   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.776648   29617 request.go:632] Waited for 195.355762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776752   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m02
	I1011 21:19:37.776766   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.776778   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.776782   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.779689   29617 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1011 21:19:37.976673   29617 request.go:632] Waited for 196.375961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976747   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m02
	I1011 21:19:37.976758   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:37.976880   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:37.976898   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:37.980426   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:37.981073   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:37.981096   29617 pod_ready.go:82] duration metric: took 399.882141ms for pod "kube-scheduler-ha-610874-m02" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:37.981108   29617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.177223   29617 request.go:632] Waited for 196.014293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177283   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-610874-m03
	I1011 21:19:38.177288   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.177296   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.177301   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.181281   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.376306   29617 request.go:632] Waited for 194.28038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376394   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes/ha-610874-m03
	I1011 21:19:38.376403   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.376412   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.376419   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.379547   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.380029   29617 pod_ready.go:93] pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace has status "Ready":"True"
	I1011 21:19:38.380048   29617 pod_ready.go:82] duration metric: took 398.929633ms for pod "kube-scheduler-ha-610874-m03" in "kube-system" namespace to be "Ready" ...
	I1011 21:19:38.380058   29617 pod_ready.go:39] duration metric: took 5.200277623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
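
	The pod_ready.go wait that just finished checks each system-critical pod's PodReady condition (and the node that hosts it) in turn. A simplified client-go sketch of the per-pod check, assuming a placeholder kubeconfig path; it lists the kube-system pods once rather than waiting per label selector as the harness does:

	// Report the PodReady condition for every pod in kube-system.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
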
	I1011 21:19:38.380084   29617 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:19:38.380134   29617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:19:38.400400   29617 api_server.go:72] duration metric: took 25.548169639s to wait for apiserver process to appear ...
	I1011 21:19:38.400421   29617 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:19:38.400455   29617 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1011 21:19:38.404896   29617 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1011 21:19:38.404960   29617 round_trippers.go:463] GET https://192.168.39.10:8443/version
	I1011 21:19:38.404973   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.404983   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.404989   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.405751   29617 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1011 21:19:38.405814   29617 api_server.go:141] control plane version: v1.31.1
	I1011 21:19:38.405829   29617 api_server.go:131] duration metric: took 5.403218ms to wait for apiserver health ...
	I1011 21:19:38.405839   29617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 21:19:38.577234   29617 request.go:632] Waited for 171.320057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577302   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.577307   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.577315   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.577319   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.583229   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.592399   29617 system_pods.go:59] 24 kube-system pods found
	I1011 21:19:38.592431   29617 system_pods.go:61] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.592436   29617 system_pods.go:61] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.592439   29617 system_pods.go:61] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.592442   29617 system_pods.go:61] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.592445   29617 system_pods.go:61] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.592448   29617 system_pods.go:61] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.592452   29617 system_pods.go:61] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.592455   29617 system_pods.go:61] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.592458   29617 system_pods.go:61] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.592461   29617 system_pods.go:61] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.592465   29617 system_pods.go:61] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.592468   29617 system_pods.go:61] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.592474   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.592477   29617 system_pods.go:61] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.592480   29617 system_pods.go:61] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.592484   29617 system_pods.go:61] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.592488   29617 system_pods.go:61] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.592493   29617 system_pods.go:61] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.592496   29617 system_pods.go:61] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.592499   29617 system_pods.go:61] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.592502   29617 system_pods.go:61] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.592511   29617 system_pods.go:61] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.592517   29617 system_pods.go:61] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.592521   29617 system_pods.go:61] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.592525   29617 system_pods.go:74] duration metric: took 186.682269ms to wait for pod list to return data ...
	I1011 21:19:38.592532   29617 default_sa.go:34] waiting for default service account to be created ...
	I1011 21:19:38.776788   29617 request.go:632] Waited for 184.17903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776850   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/default/serviceaccounts
	I1011 21:19:38.776857   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.776867   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.776874   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.780634   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:38.780764   29617 default_sa.go:45] found service account: "default"
	I1011 21:19:38.780782   29617 default_sa.go:55] duration metric: took 188.241369ms for default service account to be created ...
	I1011 21:19:38.780791   29617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 21:19:38.977229   29617 request.go:632] Waited for 196.374035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977314   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/namespaces/kube-system/pods
	I1011 21:19:38.977326   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:38.977333   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:38.977339   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:38.983305   29617 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1011 21:19:38.990701   29617 system_pods.go:86] 24 kube-system pods found
	I1011 21:19:38.990734   29617 system_pods.go:89] "coredns-7c65d6cfc9-bhkxl" [dff1cc79-8272-43f5-b525-b12e913c499d] Running
	I1011 21:19:38.990743   29617 system_pods.go:89] "coredns-7c65d6cfc9-xdhdb" [dd476307-f596-44e8-b651-095399bb6ce9] Running
	I1011 21:19:38.990750   29617 system_pods.go:89] "etcd-ha-610874" [6cd70b48-09bb-4280-8bdc-413f124d1d02] Running
	I1011 21:19:38.990756   29617 system_pods.go:89] "etcd-ha-610874-m02" [11683051-9947-42d9-92a7-ba5a2e51345a] Running
	I1011 21:19:38.990762   29617 system_pods.go:89] "etcd-ha-610874-m03" [a81d3d3e-a468-4c60-9e36-a542f7112755] Running
	I1011 21:19:38.990769   29617 system_pods.go:89] "kindnet-2c774" [dc55cd3b-0cd7-4d47-88ac-2a5936585e41] Running
	I1011 21:19:38.990775   29617 system_pods.go:89] "kindnet-pd7rn" [cc21667d-addd-48a2-a01e-97de30495101] Running
	I1011 21:19:38.990782   29617 system_pods.go:89] "kindnet-xs5m6" [22248576-e5e5-455d-87f4-c4d51effcfca] Running
	I1011 21:19:38.990790   29617 system_pods.go:89] "kube-apiserver-ha-610874" [6dcfe7a3-0e7d-4f20-bdbf-645d8e0d4466] Running
	I1011 21:19:38.990800   29617 system_pods.go:89] "kube-apiserver-ha-610874-m02" [b1c38251-ef09-43dd-8787-a0fa8823e33b] Running
	I1011 21:19:38.990808   29617 system_pods.go:89] "kube-apiserver-ha-610874-m03" [18106dfd-4932-4f5f-975b-cfae68b818ac] Running
	I1011 21:19:38.990818   29617 system_pods.go:89] "kube-controller-manager-ha-610874" [8ae5b847-3699-44d1-8b49-8dcbc8ace6eb] Running
	I1011 21:19:38.990826   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m02" [5471c5e5-a528-4ac9-a32f-b67da82e9fcd] Running
	I1011 21:19:38.990835   29617 system_pods.go:89] "kube-controller-manager-ha-610874-m03" [b98535aa-0e68-4302-b7ab-37453af6b7cf] Running
	I1011 21:19:38.990842   29617 system_pods.go:89] "kube-proxy-4bj7p" [2d78dd8b-7200-497c-9abd-09b6f4484718] Running
	I1011 21:19:38.990849   29617 system_pods.go:89] "kube-proxy-4tqhn" [960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c] Running
	I1011 21:19:38.990856   29617 system_pods.go:89] "kube-proxy-cwzw4" [515460dc-02dd-42a1-b093-c300c06979d4] Running
	I1011 21:19:38.990866   29617 system_pods.go:89] "kube-scheduler-ha-610874" [0457b316-b7f3-471f-b9e3-0a482d0437be] Running
	I1011 21:19:38.990873   29617 system_pods.go:89] "kube-scheduler-ha-610874-m02" [54a7acf7-8a8f-411c-bbd4-f22946919e6e] Running
	I1011 21:19:38.990880   29617 system_pods.go:89] "kube-scheduler-ha-610874-m03" [fd812ce2-bf14-405a-a0d3-02b267a3e6e5] Running
	I1011 21:19:38.990889   29617 system_pods.go:89] "kube-vip-ha-610874" [8ec9e25c-4f56-4d99-b54a-01e0ff0522b1] Running
	I1011 21:19:38.990896   29617 system_pods.go:89] "kube-vip-ha-610874-m02" [4dafe794-9256-46a3-866c-e3926a0153da] Running
	I1011 21:19:38.990903   29617 system_pods.go:89] "kube-vip-ha-610874-m03" [e3d56183-c8af-4ea0-a093-441ee0d965e1] Running
	I1011 21:19:38.990910   29617 system_pods.go:89] "storage-provisioner" [2066958b-e1eb-421a-939e-79e8ea7357e1] Running
	I1011 21:19:38.990922   29617 system_pods.go:126] duration metric: took 210.12433ms to wait for k8s-apps to be running ...
	I1011 21:19:38.990936   29617 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 21:19:38.991000   29617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:19:39.006368   29617 system_svc.go:56] duration metric: took 15.405995ms WaitForService to wait for kubelet
	I1011 21:19:39.006398   29617 kubeadm.go:582] duration metric: took 26.154169399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:19:39.006432   29617 node_conditions.go:102] verifying NodePressure condition ...
	I1011 21:19:39.177139   29617 request.go:632] Waited for 170.58768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177204   29617 round_trippers.go:463] GET https://192.168.39.10:8443/api/v1/nodes
	I1011 21:19:39.177210   29617 round_trippers.go:469] Request Headers:
	I1011 21:19:39.177218   29617 round_trippers.go:473]     Accept: application/json, */*
	I1011 21:19:39.177226   29617 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1011 21:19:39.180762   29617 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1011 21:19:39.182158   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182186   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182210   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182214   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182219   29617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 21:19:39.182222   29617 node_conditions.go:123] node cpu capacity is 2
	I1011 21:19:39.182225   29617 node_conditions.go:105] duration metric: took 175.788668ms to run NodePressure ...
	I1011 21:19:39.182235   29617 start.go:241] waiting for startup goroutines ...
	I1011 21:19:39.182261   29617 start.go:255] writing updated cluster config ...
	I1011 21:19:39.182594   29617 ssh_runner.go:195] Run: rm -f paused
	I1011 21:19:39.238354   29617 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:19:39.241534   29617 out.go:177] * Done! kubectl is now configured to use "ha-610874" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.460454937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c114da0-55f4-4f18-87e2-32831a0161d7 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.461903634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dc93e7f-daa5-40b1-9f94-81dbc9d0a093 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.462412806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681818462387246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dc93e7f-daa5-40b1-9f94-81dbc9d0a093 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.463309829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e56cb9f4-d8a5-45a6-9441-d5f05a9edf80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.463372908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e56cb9f4-d8a5-45a6-9441-d5f05a9edf80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.463589703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e56cb9f4-d8a5-45a6-9441-d5f05a9edf80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.501758749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2dd8559-37cc-418c-ad36-7b3fadb12b8d name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.501828729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2dd8559-37cc-418c-ad36-7b3fadb12b8d name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.502686202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abaf00ce-70db-4abd-852f-a4528a4d9b51 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.503111501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681818503091193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abaf00ce-70db-4abd-852f-a4528a4d9b51 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.503758504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07be06e3-5375-4949-9481-a5598373cb17 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.503835239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07be06e3-5375-4949-9481-a5598373cb17 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.504088419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07be06e3-5375-4949-9481-a5598373cb17 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.507081750Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8c4b2a64-e8fa-45de-9555-3e6648f39595 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.507498392Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-wdkxg,Uid:eba860b8-6c0f-433a-8fe0-a8fef6cb685b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681580504522131,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-11T21:19:40.179092319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdhdb,Uid:dd476307-f596-44e8-b651-095399bb6ce9,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728681445043815876,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-11T21:17:23.236979424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bhkxl,Uid:dff1cc79-8272-43f5-b525-b12e913c499d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681445034514057,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff1cc79-8272-43f5-b525-b12e913c499d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-10-11T21:17:23.227571519Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2066958b-e1eb-421a-939e-79e8ea7357e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681443541861734,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-11T21:17:23.232127315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&PodSandboxMetadata{Name:kindnet-pd7rn,Uid:cc21667d-addd-48a2-a01e-97de30495101,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681425422030922,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-11T21:17:03.604990394Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&PodSandboxMetadata{Name:kube-proxy-4tqhn,Uid:960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681425401902636,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-11T21:17:03.590704163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-610874,Uid:2a6c5660833ee4be1cedaebce01ddba3,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681412595485609,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a6c5660833ee4be1cedaebce01ddba3,kubernetes.io/config.seen: 2024-10-11T21:16:51.496444363Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-610874,Uid:0331c21fedf7c3c7df69bbc42aa336c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681412594510590,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c
1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.10:8443,kubernetes.io/config.hash: 0331c21fedf7c3c7df69bbc42aa336c1,kubernetes.io/config.seen: 2024-10-11T21:16:51.496442941Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-610874,Uid:2a418c327914f91342d92b51abad5f64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681412592516430,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a418c327914f91342d92b51abad5f64,kubernetes.io/config.seen: 2024-10-11T21:16:51.496445576Z,kubernetes.io/config.source: file,},RuntimeHandler:,}
,&PodSandbox{Id:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&PodSandboxMetadata{Name:etcd-ha-610874,Uid:864bf00bf1b32b1846f24d8cd17e31fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681412565113890,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.10:2379,kubernetes.io/config.hash: 864bf00bf1b32b1846f24d8cd17e31fd,kubernetes.io/config.seen: 2024-10-11T21:16:51.496439059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-610874,Uid:ddc7fcae0779e224950f66ce4b0cf173,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728681412563845744,Labels:
map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{kubernetes.io/config.hash: ddc7fcae0779e224950f66ce4b0cf173,kubernetes.io/config.seen: 2024-10-11T21:16:51.496446426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8c4b2a64-e8fa-45de-9555-3e6648f39595 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.508611855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97bc0e3d-f8b2-4c3b-a19e-2edeafddab8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.508684970Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97bc0e3d-f8b2-4c3b-a19e-2edeafddab8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.508952757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97bc0e3d-f8b2-4c3b-a19e-2edeafddab8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.543041486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e99b220-b312-489c-a96e-6c12f9b570e3 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.543127807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e99b220-b312-489c-a96e-6c12f9b570e3 name=/runtime.v1.RuntimeService/Version
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.544382176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f7aa123-1a31-4300-8212-93db35b539f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.544823473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681818544802528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f7aa123-1a31-4300-8212-93db35b539f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.545256054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8982efb-ec31-4a77-a8ef-eb32f6082e2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.545324543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8982efb-ec31-4a77-a8ef-eb32f6082e2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 21:23:38 ha-610874 crio[662]: time="2024-10-11 21:23:38.545550326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a12e9c8cc5fc503cb3d088434daca6fa70557545de8f6271a14617011bb8e4fe,PodSandboxId:3d6c8146ac279f839a9722b1e519709adcf4c13266dcd24bb4be2843837fa5ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728681584515868695,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wdkxg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eba860b8-6c0f-433a-8fe0-a8fef6cb685b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6,PodSandboxId:8079f4949344c506352ce2e6017c5865cbcba0611fbb7b6aa734b3f8018848fd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445258071611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdhdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd476307-f596-44e8-b651-095399bb6ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb,PodSandboxId:bb1b1e2f66116e00a0b588459b06f61b10b934e36e569b0c996b6d3186666168,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728681445232871591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bhkxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dff1cc79-8272-43f5-b525-b12e913c499d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536,PodSandboxId:5b0253d201393151c16424c686dc1852fc6ed843b7773d31f33242ca9e613825,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728681443641452489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2066958b-e1eb-421a-939e-79e8ea7357e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952,PodSandboxId:bc055170688e11254b4e1b480fe275b7d4698f854b540f31ce5421a20ebe8ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728681431821685012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd7rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc21667d-addd-48a2-a01e-97de30495101,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b,PodSandboxId:9bb0d73fd8a6dc65e03f914718411b2e3acf63d65019ec914337764a5b1acde0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172868142
5602635911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tqhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960c7cfe-e1ab-401b-a4fa-2fa7e7ba2f4c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d,PodSandboxId:343b700a511ad4c723e763f5d6f205067bfda9ca664d803099b07bfaee6a534c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17286814159
50493049,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7fcae0779e224950f66ce4b0cf173,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94,PodSandboxId:089d2c0589273506db9c0b4cfe87742fc33a12ae5a92164e487abd3d9814e09f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728681412839763970,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a418c327914f91342d92b51abad5f64,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865,PodSandboxId:9a96e5f0cd28ab016d78a67db1cf17167b72a2cf22286acf851c3662db92f75a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728681412889527299,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a6c5660833ee4be1cedaebce01ddba3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948,PodSandboxId:6fbc98773bd420b0c0a9addd583ca8cb235d197be3f6399aad087b586f74adaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728681412823876431,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0331c21fedf7c3c7df69bbc42aa336c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a,PodSandboxId:65e184a9323648bf581759045d55e02c4e21f2a15d4a1261297d5d81dd9ec157,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728681412736888215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-610874,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864bf00bf1b32b1846f24d8cd17e31fd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8982efb-ec31-4a77-a8ef-eb32f6082e2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a12e9c8cc5fc5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3d6c8146ac279       busybox-7dff88458-wdkxg
	add7da026dcc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8079f4949344c       coredns-7c65d6cfc9-xdhdb
	f6f7910716598       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   bb1b1e2f66116       coredns-7c65d6cfc9-bhkxl
	01564ba5bc1e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5b0253d201393       storage-provisioner
	9d5b2015aad60       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   bc055170688e1       kindnet-pd7rn
	4af1bc183cfbe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   9bb0d73fd8a6d       kube-proxy-4tqhn
	7009deb3ff5ef       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   343b700a511ad       kube-vip-ha-610874
	1bb0907534c8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9a96e5f0cd28a       kube-controller-manager-ha-610874
	093fe14b91d96       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   089d2c0589273       kube-scheduler-ha-610874
	b6a994e3f4bd9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6fbc98773bd42       kube-apiserver-ha-610874
	1cf13112be94f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   65e184a932364       etcd-ha-610874
	
	
	==> coredns [add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6] <==
	[INFO] 10.244.1.2:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143766s
	[INFO] 10.244.1.2:38119 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142587s
	[INFO] 10.244.1.2:40246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.002793445s
	[INFO] 10.244.1.2:46273 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000207574s
	[INFO] 10.244.0.4:51515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133463s
	[INFO] 10.244.0.4:34555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773084s
	[INFO] 10.244.0.4:56190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010851s
	[INFO] 10.244.0.4:35324 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114943s
	[INFO] 10.244.0.4:37261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075619s
	[INFO] 10.244.2.2:33936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100657s
	[INFO] 10.244.2.2:47182 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246779s
	[INFO] 10.244.1.2:44485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167961s
	[INFO] 10.244.1.2:46483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141019s
	[INFO] 10.244.1.2:55464 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121351s
	[INFO] 10.244.0.4:47194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117616s
	[INFO] 10.244.0.4:49523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148468s
	[INFO] 10.244.0.4:45932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127987s
	[INFO] 10.244.0.4:49317 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075167s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169352s
	[INFO] 10.244.2.2:33809 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014751s
	[INFO] 10.244.2.2:44485 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176967s
	[INFO] 10.244.1.2:48359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011299s
	[INFO] 10.244.0.4:56947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140437s
	[INFO] 10.244.0.4:57754 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075899s
	[INFO] 10.244.0.4:59528 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091718s
	
	
	==> coredns [f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb] <==
	[INFO] 127.0.0.1:48153 - 48750 "HINFO IN 7219889624523006915.8528053042981959638. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015325438s
	[INFO] 10.244.2.2:47536 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.017241259s
	[INFO] 10.244.2.2:38591 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013641236s
	[INFO] 10.244.1.2:49949 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322762s
	[INFO] 10.244.1.2:43849 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00009337s
	[INFO] 10.244.0.4:40246 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000070768s
	[INFO] 10.244.0.4:45808 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00140468s
	[INFO] 10.244.2.2:36598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219913s
	[INFO] 10.244.2.2:59970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164371s
	[INFO] 10.244.2.2:54785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130909s
	[INFO] 10.244.1.2:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791262s
	[INFO] 10.244.1.2:49139 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158826s
	[INFO] 10.244.1.2:59870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00130207s
	[INFO] 10.244.1.2:48112 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127361s
	[INFO] 10.244.0.4:37981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152222s
	[INFO] 10.244.0.4:40975 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001145115s
	[INFO] 10.244.0.4:46746 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060695s
	[INFO] 10.244.2.2:60221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111051s
	[INFO] 10.244.2.2:45949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000966s
	[INFO] 10.244.1.2:51845 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131185s
	[INFO] 10.244.2.2:49925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140614s
	[INFO] 10.244.1.2:40749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139491s
	[INFO] 10.244.1.2:40058 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192557s
	[INFO] 10.244.1.2:36253 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154213s
	[INFO] 10.244.0.4:54354 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127201s
	
	
	==> describe nodes <==
	Name:               ha-610874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_16_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:16:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:16:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:02 +0000   Fri, 11 Oct 2024 21:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    ha-610874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cfe54b8903d4e3899113202463cdd3d
	  System UUID:                0cfe54b8-903d-4e38-9911-3202463cdd3d
	  Boot ID:                    afa53331-2d72-4daf-aead-d3b59f60fb23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdkxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-7c65d6cfc9-bhkxl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m35s
	  kube-system                 coredns-7c65d6cfc9-xdhdb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m35s
	  kube-system                 etcd-ha-610874                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m39s
	  kube-system                 kindnet-pd7rn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-apiserver-ha-610874             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-610874    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-proxy-4tqhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-610874             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-vip-ha-610874                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m32s  kube-proxy       
	  Normal  Starting                 6m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s  kubelet          Node ha-610874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s  kubelet          Node ha-610874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s  kubelet          Node ha-610874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m36s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  NodeReady                6m15s  kubelet          Node ha-610874 status is now: NodeReady
	  Normal  RegisteredNode           5m38s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	  Normal  RegisteredNode           4m20s  node-controller  Node ha-610874 event: Registered Node ha-610874 in Controller
	
	
	Name:               ha-610874-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_17_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:17:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:20:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 11 Oct 2024 21:19:55 +0000   Fri, 11 Oct 2024 21:21:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-610874-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5e48fde498443da85ce03c51747b961
	  System UUID:                e5e48fde-4984-43da-85ce-03c51747b961
	  Boot ID:                    bf2f6504-4406-4797-b6e1-dc754be8ce6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pwg8s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-610874-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m44s
	  kube-system                 kindnet-xs5m6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m46s
	  kube-system                 kube-apiserver-ha-610874-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-ha-610874-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-4bj7p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-scheduler-ha-610874-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-610874-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m42s                  kube-proxy       
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m46s (x8 over 5m46s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x8 over 5m46s)  kubelet          Node ha-610874-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x7 over 5m46s)  kubelet          Node ha-610874-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-610874-m02 event: Registered Node ha-610874-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-610874-m02 status is now: NodeNotReady
	
	
	Name:               ha-610874-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_19_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:10 +0000   Fri, 11 Oct 2024 21:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-610874-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1063a3d54d5d40c88a61db94380d3423
	  System UUID:                1063a3d5-4d5d-40c8-8a61-db94380d3423
	  Boot ID:                    ced9dc07-ccd1-4190-aae0-50f9a8bdae06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4sstr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-610874-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m27s
	  kube-system                 kindnet-2c774                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m29s
	  kube-system                 kube-apiserver-ha-610874-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-ha-610874-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-cwzw4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-ha-610874-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-610874-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m29s                  cidrAllocator    Node ha-610874-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m29s)  kubelet          Node ha-610874-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node ha-610874-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-610874-m03 event: Registered Node ha-610874-m03 in Controller
	
	
	Name:               ha-610874-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-610874-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=ha-610874
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_11T21_20_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-610874-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:23:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:20:49 +0000   Fri, 11 Oct 2024 21:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-610874-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75d61525a70843b49a5efd4786a05869
	  System UUID:                75d61525-a708-43b4-9a5e-fd4786a05869
	  Boot ID:                    172ace10-e670-4373-a755-bb93871c28da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7dn76       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-vrd24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node ha-610874-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node ha-610874-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m19s                  cidrAllocator    Node ha-610874-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-610874-m04 event: Registered Node ha-610874-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-610874-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct11 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.855992] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543327] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581790] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.580104] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056339] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.193419] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.137869] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293941] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.956728] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.562630] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.064485] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.508464] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.090437] kauditd_printk_skb: 79 callbacks suppressed
	[Oct11 21:17] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.436722] kauditd_printk_skb: 29 callbacks suppressed
	[ +46.213407] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a] <==
	{"level":"warn","ts":"2024-10-11T21:23:38.580779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.681761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.781101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.811071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.819159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.822594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.829950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.836341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.842874Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.845764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.848784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.856046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.865901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.871917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.874771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.877647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.882864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.883904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.889924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.896150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.899260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.902546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.909510Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.917612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-11T21:23:38.923159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f8926bd555ec3d0e","from":"f8926bd555ec3d0e","remote-peer-id":"75f7d6a6d827e320","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:23:39 up 7 min,  0 users,  load average: 0.31, 0.37, 0.20
	Linux ha-610874 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952] <==
	I1011 21:23:03.018292       1 main.go:300] handling current node
	I1011 21:23:13.008357       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:13.008403       1 main.go:300] handling current node
	I1011 21:23:13.008468       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:13.008474       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:13.008844       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:13.008922       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:13.009419       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:13.009448       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:23.017976       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:23.018143       1 main.go:300] handling current node
	I1011 21:23:23.018234       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:23.018259       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:23.018517       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:23.018551       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:23.018673       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:23.018695       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	I1011 21:23:33.008353       1 main.go:296] Handling node with IPs: map[192.168.39.10:{}]
	I1011 21:23:33.008439       1 main.go:300] handling current node
	I1011 21:23:33.008462       1 main.go:296] Handling node with IPs: map[192.168.39.11:{}]
	I1011 21:23:33.008474       1 main.go:323] Node ha-610874-m02 has CIDR [10.244.1.0/24] 
	I1011 21:23:33.008865       1 main.go:296] Handling node with IPs: map[192.168.39.222:{}]
	I1011 21:23:33.008902       1 main.go:323] Node ha-610874-m03 has CIDR [10.244.2.0/24] 
	I1011 21:23:33.009247       1 main.go:296] Handling node with IPs: map[192.168.39.87:{}]
	I1011 21:23:33.009277       1 main.go:323] Node ha-610874-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948] <==
	I1011 21:17:03.544827       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1011 21:17:03.633951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1011 21:17:53.070315       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.070829       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 84.644µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:17:53.072106       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.073324       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1011 21:17:53.074623       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.578549ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:10.074019       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.449µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1011 21:19:10.074013       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9bd8f8e8-8e91-4067-a12f-1ea2d8bd41c6"
	E1011 21:19:10.074068       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.809µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1011 21:19:45.881753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47690: use of closed network connection
	E1011 21:19:46.062184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47710: use of closed network connection
	E1011 21:19:46.253652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E1011 21:19:46.438494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47750: use of closed network connection
	E1011 21:19:46.637537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47770: use of closed network connection
	E1011 21:19:46.815140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45184: use of closed network connection
	E1011 21:19:47.002661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45216: use of closed network connection
	E1011 21:19:47.179398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45236: use of closed network connection
	E1011 21:19:47.346528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45250: use of closed network connection
	E1011 21:19:47.638405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45264: use of closed network connection
	E1011 21:19:47.808669       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45288: use of closed network connection
	E1011 21:19:47.977304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45304: use of closed network connection
	E1011 21:19:48.152762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45326: use of closed network connection
	E1011 21:19:48.324710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45346: use of closed network connection
	E1011 21:19:48.491718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45354: use of closed network connection
	
	
	==> kube-controller-manager [1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865] <==
	I1011 21:20:18.968008       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-610874-m04" podCIDRs=["10.244.3.0/24"]
	I1011 21:20:18.968119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.968257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:18.984966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:19.260924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.121280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:20.397093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.070457       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-610874-m04"
	I1011 21:20:23.072402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.132945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.420908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:23.568334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:29.120840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:39.562762       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:20:39.580852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:40.377354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:20:49.215156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m04"
	I1011 21:21:38.097956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-610874-m04"
	I1011 21:21:38.098503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.132013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:38.234358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.800775ms"
	I1011 21:21:38.234458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.4µs"
	I1011 21:21:38.464262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	I1011 21:21:43.340055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-610874-m02"
	
	
	==> kube-proxy [4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:17:05.854510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:17:05.879022       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	E1011 21:17:05.879501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:17:05.914134       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:17:05.914253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:17:05.914286       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:17:05.916891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:17:05.917757       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:17:05.917796       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:17:05.920479       1 config.go:199] "Starting service config controller"
	I1011 21:17:05.920740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:17:05.920939       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:17:05.920964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:17:05.921847       1 config.go:328] "Starting node config controller"
	I1011 21:17:05.921877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:17:06.021605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:17:06.021672       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:17:06.021955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94] <==
	W1011 21:16:56.914961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:56.914997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:56.955611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 21:16:56.955698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.100673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:16:57.100737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.117148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.117326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.263820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 21:16:57.264353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.296892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 21:16:57.297090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.359800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 21:16:57.360057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 21:16:57.555273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 21:16:57.555402       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 21:17:00.497419       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1011 21:20:19.054608       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7dn76" node="ha-610874-m04"
	E1011 21:20:19.055446       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7dn76\": pod kindnet-7dn76 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-7dn76"
	E1011 21:20:19.188470       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dz2h8" node="ha-610874-m04"
	E1011 21:20:19.188552       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dz2h8\": pod kindnet-dz2h8 is already assigned to node \"ha-610874-m04\"" pod="kube-system/kindnet-dz2h8"
	E1011 21:20:19.193309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	E1011 21:20:19.195518       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f3a80da1-771c-458b-85ce-bff2b7759d1e(kube-system/kube-proxy-ht4ns) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ht4ns"
	E1011 21:20:19.195828       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ht4ns\": pod kube-proxy-ht4ns is already assigned to node \"ha-610874-m04\"" pod="kube-system/kube-proxy-ht4ns"
	I1011 21:20:19.196036       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ht4ns" node="ha-610874-m04"
	
	
	==> kubelet <==
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038549    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:09 ha-610874 kubelet[1312]: E1011 21:22:09.038630    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681729038152223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040811    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:19 ha-610874 kubelet[1312]: E1011 21:22:19.040841    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681739040432589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.042974    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:29 ha-610874 kubelet[1312]: E1011 21:22:29.043019    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681749042594287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044063    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:39 ha-610874 kubelet[1312]: E1011 21:22:39.044089    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681759043815866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045695    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:49 ha-610874 kubelet[1312]: E1011 21:22:49.045734    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681769045448487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:58 ha-610874 kubelet[1312]: E1011 21:22:58.943175    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 21:22:58 ha-610874 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 21:22:58 ha-610874 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.046933    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:22:59 ha-610874 kubelet[1312]: E1011 21:22:59.047037    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681779046714955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049554    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:09 ha-610874 kubelet[1312]: E1011 21:23:09.049631    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681789048818103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.053671    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:19 ha-610874 kubelet[1312]: E1011 21:23:19.054088    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681799053044733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:29 ha-610874 kubelet[1312]: E1011 21:23:29.057472    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681809056986667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:29 ha-610874 kubelet[1312]: E1011 21:23:29.057867    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681809056986667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:39 ha-610874 kubelet[1312]: E1011 21:23:39.059542    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681819058994564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 21:23:39 ha-610874 kubelet[1312]: E1011 21:23:39.059569    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728681819058994564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-610874 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-610874 -v=7 --alsologtostderr
E1011 21:25:24.492072   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-610874 -v=7 --alsologtostderr: exit status 82 (2m1.888024656s)

                                                
                                                
-- stdout --
	* Stopping node "ha-610874-m04"  ...
	* Stopping node "ha-610874-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:23:39.975369   35297 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:23:39.975486   35297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:23:39.975496   35297 out.go:358] Setting ErrFile to fd 2...
	I1011 21:23:39.975502   35297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:23:39.975671   35297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:23:39.975891   35297 out.go:352] Setting JSON to false
	I1011 21:23:39.975984   35297 mustload.go:65] Loading cluster: ha-610874
	I1011 21:23:39.976371   35297 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:23:39.976471   35297 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:23:39.976668   35297 mustload.go:65] Loading cluster: ha-610874
	I1011 21:23:39.976822   35297 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:23:39.976865   35297 stop.go:39] StopHost: ha-610874-m04
	I1011 21:23:39.977248   35297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:23:39.977301   35297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:23:39.992238   35297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35527
	I1011 21:23:39.992678   35297 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:23:39.993228   35297 main.go:141] libmachine: Using API Version  1
	I1011 21:23:39.993248   35297 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:23:39.993545   35297 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:23:39.996006   35297 out.go:177] * Stopping node "ha-610874-m04"  ...
	I1011 21:23:39.997119   35297 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 21:23:39.997148   35297 main.go:141] libmachine: (ha-610874-m04) Calling .DriverName
	I1011 21:23:39.997334   35297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 21:23:39.997354   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHHostname
	I1011 21:23:40.000107   35297 main.go:141] libmachine: (ha-610874-m04) DBG | domain ha-610874-m04 has defined MAC address 52:54:00:4d:ac:22 in network mk-ha-610874
	I1011 21:23:40.000489   35297 main.go:141] libmachine: (ha-610874-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:ac:22", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:20:04 +0000 UTC Type:0 Mac:52:54:00:4d:ac:22 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-610874-m04 Clientid:01:52:54:00:4d:ac:22}
	I1011 21:23:40.000513   35297 main.go:141] libmachine: (ha-610874-m04) DBG | domain ha-610874-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:4d:ac:22 in network mk-ha-610874
	I1011 21:23:40.000613   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHPort
	I1011 21:23:40.000772   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHKeyPath
	I1011 21:23:40.000914   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHUsername
	I1011 21:23:40.001041   35297 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m04/id_rsa Username:docker}
	I1011 21:23:40.091873   35297 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 21:23:40.145026   35297 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 21:23:40.199661   35297 main.go:141] libmachine: Stopping "ha-610874-m04"...
	I1011 21:23:40.199684   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetState
	I1011 21:23:40.201171   35297 main.go:141] libmachine: (ha-610874-m04) Calling .Stop
	I1011 21:23:40.205661   35297 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 0/120
	I1011 21:23:41.397247   35297 main.go:141] libmachine: (ha-610874-m04) Calling .GetState
	I1011 21:23:41.398394   35297 main.go:141] libmachine: Machine "ha-610874-m04" was stopped.
	I1011 21:23:41.398409   35297 stop.go:75] duration metric: took 1.401290532s to stop
	I1011 21:23:41.398442   35297 stop.go:39] StopHost: ha-610874-m03
	I1011 21:23:41.398749   35297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:23:41.398795   35297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:23:41.412838   35297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I1011 21:23:41.413286   35297 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:23:41.413771   35297 main.go:141] libmachine: Using API Version  1
	I1011 21:23:41.413797   35297 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:23:41.414117   35297 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:23:41.415801   35297 out.go:177] * Stopping node "ha-610874-m03"  ...
	I1011 21:23:41.416929   35297 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 21:23:41.416948   35297 main.go:141] libmachine: (ha-610874-m03) Calling .DriverName
	I1011 21:23:41.417147   35297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 21:23:41.417169   35297 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHHostname
	I1011 21:23:41.419523   35297 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:23:41.419942   35297 main.go:141] libmachine: (ha-610874-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:11:ff", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:18:34 +0000 UTC Type:0 Mac:52:54:00:54:11:ff Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-610874-m03 Clientid:01:52:54:00:54:11:ff}
	I1011 21:23:41.419972   35297 main.go:141] libmachine: (ha-610874-m03) DBG | domain ha-610874-m03 has defined IP address 192.168.39.222 and MAC address 52:54:00:54:11:ff in network mk-ha-610874
	I1011 21:23:41.420109   35297 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHPort
	I1011 21:23:41.420272   35297 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHKeyPath
	I1011 21:23:41.420406   35297 main.go:141] libmachine: (ha-610874-m03) Calling .GetSSHUsername
	I1011 21:23:41.420535   35297 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m03/id_rsa Username:docker}
	I1011 21:23:41.518908   35297 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 21:23:41.572272   35297 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 21:23:41.629614   35297 main.go:141] libmachine: Stopping "ha-610874-m03"...
	I1011 21:23:41.629668   35297 main.go:141] libmachine: (ha-610874-m03) Calling .GetState
	I1011 21:23:41.631124   35297 main.go:141] libmachine: (ha-610874-m03) Calling .Stop
	I1011 21:23:41.634383   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 0/120
	I1011 21:23:42.635779   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 1/120
	I1011 21:23:43.637023   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 2/120
	I1011 21:23:44.638424   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 3/120
	I1011 21:23:45.639573   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 4/120
	I1011 21:23:46.641427   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 5/120
	I1011 21:23:47.642849   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 6/120
	I1011 21:23:48.644387   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 7/120
	I1011 21:23:49.645905   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 8/120
	I1011 21:23:50.647392   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 9/120
	I1011 21:23:51.649184   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 10/120
	I1011 21:23:52.651058   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 11/120
	I1011 21:23:53.653145   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 12/120
	I1011 21:23:54.654659   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 13/120
	I1011 21:23:55.656336   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 14/120
	I1011 21:23:56.658342   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 15/120
	I1011 21:23:57.659824   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 16/120
	I1011 21:23:58.661420   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 17/120
	I1011 21:23:59.662887   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 18/120
	I1011 21:24:00.664535   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 19/120
	I1011 21:24:01.666354   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 20/120
	I1011 21:24:02.667923   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 21/120
	I1011 21:24:03.669316   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 22/120
	I1011 21:24:04.670730   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 23/120
	I1011 21:24:05.672140   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 24/120
	I1011 21:24:06.673856   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 25/120
	I1011 21:24:07.675067   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 26/120
	I1011 21:24:08.676424   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 27/120
	I1011 21:24:09.677657   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 28/120
	I1011 21:24:10.678970   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 29/120
	I1011 21:24:11.680530   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 30/120
	I1011 21:24:12.681728   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 31/120
	I1011 21:24:13.683000   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 32/120
	I1011 21:24:14.684259   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 33/120
	I1011 21:24:15.685512   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 34/120
	I1011 21:24:16.687173   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 35/120
	I1011 21:24:17.688273   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 36/120
	I1011 21:24:18.689444   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 37/120
	I1011 21:24:19.691066   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 38/120
	I1011 21:24:20.693131   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 39/120
	I1011 21:24:21.694967   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 40/120
	I1011 21:24:22.697158   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 41/120
	I1011 21:24:23.698350   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 42/120
	I1011 21:24:24.699667   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 43/120
	I1011 21:24:25.700803   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 44/120
	I1011 21:24:26.702435   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 45/120
	I1011 21:24:27.703782   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 46/120
	I1011 21:24:28.705386   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 47/120
	I1011 21:24:29.706635   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 48/120
	I1011 21:24:30.707943   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 49/120
	I1011 21:24:31.709217   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 50/120
	I1011 21:24:32.710568   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 51/120
	I1011 21:24:33.711845   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 52/120
	I1011 21:24:34.713151   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 53/120
	I1011 21:24:35.714425   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 54/120
	I1011 21:24:36.716001   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 55/120
	I1011 21:24:37.717450   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 56/120
	I1011 21:24:38.718929   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 57/120
	I1011 21:24:39.720118   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 58/120
	I1011 21:24:40.721464   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 59/120
	I1011 21:24:41.723613   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 60/120
	I1011 21:24:42.725268   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 61/120
	I1011 21:24:43.726650   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 62/120
	I1011 21:24:44.727986   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 63/120
	I1011 21:24:45.729552   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 64/120
	I1011 21:24:46.731467   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 65/120
	I1011 21:24:47.732678   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 66/120
	I1011 21:24:48.734102   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 67/120
	I1011 21:24:49.735289   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 68/120
	I1011 21:24:50.736614   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 69/120
	I1011 21:24:51.738676   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 70/120
	I1011 21:24:52.739814   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 71/120
	I1011 21:24:53.741105   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 72/120
	I1011 21:24:54.742261   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 73/120
	I1011 21:24:55.743492   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 74/120
	I1011 21:24:56.744989   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 75/120
	I1011 21:24:57.746395   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 76/120
	I1011 21:24:58.747704   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 77/120
	I1011 21:24:59.749017   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 78/120
	I1011 21:25:00.750187   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 79/120
	I1011 21:25:01.751899   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 80/120
	I1011 21:25:02.753100   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 81/120
	I1011 21:25:03.754286   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 82/120
	I1011 21:25:04.755653   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 83/120
	I1011 21:25:05.757179   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 84/120
	I1011 21:25:06.759051   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 85/120
	I1011 21:25:07.760567   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 86/120
	I1011 21:25:08.761863   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 87/120
	I1011 21:25:09.763374   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 88/120
	I1011 21:25:10.765024   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 89/120
	I1011 21:25:11.766371   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 90/120
	I1011 21:25:12.767976   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 91/120
	I1011 21:25:13.769243   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 92/120
	I1011 21:25:14.770418   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 93/120
	I1011 21:25:15.771687   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 94/120
	I1011 21:25:16.773895   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 95/120
	I1011 21:25:17.775188   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 96/120
	I1011 21:25:18.776460   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 97/120
	I1011 21:25:19.778020   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 98/120
	I1011 21:25:20.779257   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 99/120
	I1011 21:25:21.781549   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 100/120
	I1011 21:25:22.782960   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 101/120
	I1011 21:25:23.784128   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 102/120
	I1011 21:25:24.785791   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 103/120
	I1011 21:25:25.786928   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 104/120
	I1011 21:25:26.788414   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 105/120
	I1011 21:25:27.790183   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 106/120
	I1011 21:25:28.791450   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 107/120
	I1011 21:25:29.792914   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 108/120
	I1011 21:25:30.794312   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 109/120
	I1011 21:25:31.795950   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 110/120
	I1011 21:25:32.797964   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 111/120
	I1011 21:25:33.799307   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 112/120
	I1011 21:25:34.800600   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 113/120
	I1011 21:25:35.801878   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 114/120
	I1011 21:25:36.804124   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 115/120
	I1011 21:25:37.806194   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 116/120
	I1011 21:25:38.807830   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 117/120
	I1011 21:25:39.809146   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 118/120
	I1011 21:25:40.810531   35297 main.go:141] libmachine: (ha-610874-m03) Waiting for machine to stop 119/120
	I1011 21:25:41.811259   35297 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 21:25:41.811328   35297 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1011 21:25:41.813222   35297 out.go:201] 
	W1011 21:25:41.814507   35297 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1011 21:25:41.814520   35297 out.go:270] * 
	W1011 21:25:41.816572   35297 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 21:25:41.817878   35297 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-610874 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-610874 --wait=true -v=7 --alsologtostderr
E1011 21:25:52.193325   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:27:06.383419   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:28:29.454087   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-610874 --wait=true -v=7 --alsologtostderr: (3m58.880942916s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-610874
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (2.311513942s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-610874 node start m02 -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-610874 -v=7                                                           | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-610874 -v=7                                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-610874 --wait=true -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:25 UTC | 11 Oct 24 21:29 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-610874                                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:29 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:25:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:25:41.865984   35758 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:25:41.866075   35758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:25:41.866083   35758 out.go:358] Setting ErrFile to fd 2...
	I1011 21:25:41.866087   35758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:25:41.866257   35758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:25:41.866948   35758 out.go:352] Setting JSON to false
	I1011 21:25:41.867818   35758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4087,"bootTime":1728677855,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:25:41.867947   35758 start.go:139] virtualization: kvm guest
	I1011 21:25:41.870106   35758 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:25:41.871650   35758 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:25:41.871688   35758 notify.go:220] Checking for updates...
	I1011 21:25:41.873898   35758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:25:41.875077   35758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:25:41.876222   35758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:25:41.877352   35758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:25:41.878470   35758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:25:41.879772   35758 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:25:41.879869   35758 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:25:41.880321   35758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:25:41.880362   35758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:25:41.895047   35758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I1011 21:25:41.895555   35758 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:25:41.896048   35758 main.go:141] libmachine: Using API Version  1
	I1011 21:25:41.896067   35758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:25:41.896359   35758 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:25:41.896536   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.932372   35758 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:25:41.933519   35758 start.go:297] selected driver: kvm2
	I1011 21:25:41.933533   35758 start.go:901] validating driver "kvm2" against &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:25:41.933659   35758 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:25:41.933998   35758 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:25:41.934071   35758 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:25:41.947927   35758 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:25:41.948568   35758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:25:41.948604   35758 cni.go:84] Creating CNI manager for ""
	I1011 21:25:41.948658   35758 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1011 21:25:41.948723   35758 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fal
se headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:25:41.948867   35758 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:25:41.951013   35758 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:25:41.951940   35758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:25:41.951972   35758 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:25:41.951984   35758 cache.go:56] Caching tarball of preloaded images
	I1011 21:25:41.952069   35758 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:25:41.952080   35758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:25:41.952206   35758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:25:41.952414   35758 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:25:41.952468   35758 start.go:364] duration metric: took 36.13µs to acquireMachinesLock for "ha-610874"
	I1011 21:25:41.952487   35758 start.go:96] Skipping create...Using existing machine configuration
	I1011 21:25:41.952496   35758 fix.go:54] fixHost starting: 
	I1011 21:25:41.952757   35758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:25:41.952794   35758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:25:41.966359   35758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I1011 21:25:41.966873   35758 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:25:41.967368   35758 main.go:141] libmachine: Using API Version  1
	I1011 21:25:41.967384   35758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:25:41.967706   35758 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:25:41.967862   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.968023   35758 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:25:41.969605   35758 fix.go:112] recreateIfNeeded on ha-610874: state=Running err=<nil>
	W1011 21:25:41.969629   35758 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 21:25:41.971228   35758 out.go:177] * Updating the running kvm2 "ha-610874" VM ...
	I1011 21:25:41.972244   35758 machine.go:93] provisionDockerMachine start ...
	I1011 21:25:41.972259   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.972427   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:41.974586   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:41.975012   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:41.975034   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:41.975145   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:41.975273   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:41.975414   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:41.975546   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:41.975687   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:41.975860   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:41.975871   35758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:25:42.092032   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:25:42.092065   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.092286   35758 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:25:42.092314   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.092494   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.095150   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.095535   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.095569   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.095726   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.095904   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.096062   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.096178   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.096351   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.096558   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.096572   35758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:25:42.227142   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:25:42.227164   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.229708   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.230053   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.230074   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.230237   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.230406   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.230574   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.230704   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.230817   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.230980   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.230994   35758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:25:42.347624   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:25:42.347648   35758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:25:42.347663   35758 buildroot.go:174] setting up certificates
	I1011 21:25:42.347674   35758 provision.go:84] configureAuth start
	I1011 21:25:42.347684   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.347954   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:25:42.350461   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.350857   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.350876   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.351080   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.353537   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.353857   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.353888   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.353987   35758 provision.go:143] copyHostCerts
	I1011 21:25:42.354010   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:25:42.354045   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:25:42.354058   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:25:42.354125   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:25:42.354191   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:25:42.354219   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:25:42.354226   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:25:42.354249   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:25:42.354292   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:25:42.354308   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:25:42.354314   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:25:42.354336   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:25:42.354390   35758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
	I1011 21:25:42.400146   35758 provision.go:177] copyRemoteCerts
	I1011 21:25:42.400197   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:25:42.400222   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.402685   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.403052   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.403075   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.403219   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.403394   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.403529   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.403630   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:25:42.494067   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:25:42.494141   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:25:42.520514   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:25:42.520596   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1011 21:25:42.546929   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:25:42.547006   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:25:42.571540   35758 provision.go:87] duration metric: took 223.855512ms to configureAuth
	I1011 21:25:42.571564   35758 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:25:42.571804   35758 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:25:42.571891   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.574450   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.574909   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.574937   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.575121   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.575321   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.575480   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.575623   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.575786   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.575983   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.576001   35758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:27:13.397650   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:27:13.397675   35758 machine.go:96] duration metric: took 1m31.425420131s to provisionDockerMachine
	I1011 21:27:13.397689   35758 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:27:13.397703   35758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:27:13.397744   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.398084   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:27:13.398117   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.401442   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.401908   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.401926   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.402080   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.402286   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.402448   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.402556   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.489647   35758 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:27:13.493850   35758 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:27:13.493872   35758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:27:13.493934   35758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:27:13.494011   35758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:27:13.494020   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:27:13.494109   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:27:13.503176   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:27:13.530740   35758 start.go:296] duration metric: took 133.036947ms for postStartSetup
	I1011 21:27:13.530781   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.531040   35758 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1011 21:27:13.531061   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.533565   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.534021   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.534062   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.534214   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.534418   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.534662   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.534845   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	W1011 21:27:13.621467   35758 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1011 21:27:13.621491   35758 fix.go:56] duration metric: took 1m31.668996621s for fixHost
	I1011 21:27:13.621511   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.624463   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.624827   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.624867   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.625044   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.625249   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.625383   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.625631   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.625819   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:27:13.625988   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:27:13.625998   35758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:27:13.739820   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728682033.695405341
	
	I1011 21:27:13.739846   35758 fix.go:216] guest clock: 1728682033.695405341
	I1011 21:27:13.739855   35758 fix.go:229] Guest: 2024-10-11 21:27:13.695405341 +0000 UTC Remote: 2024-10-11 21:27:13.621498056 +0000 UTC m=+91.792554019 (delta=73.907285ms)
	I1011 21:27:13.739874   35758 fix.go:200] guest clock delta is within tolerance: 73.907285ms
	I1011 21:27:13.739879   35758 start.go:83] releasing machines lock for "ha-610874", held for 1m31.787400304s
	I1011 21:27:13.739897   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.740136   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:27:13.743172   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.743570   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.743605   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.743755   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744239   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744385   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744500   35758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:27:13.744544   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.744582   35758 ssh_runner.go:195] Run: cat /version.json
	I1011 21:27:13.744604   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.747134   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747374   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747591   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.747620   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747775   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.747814   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.747841   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747939   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.748019   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.748147   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.748151   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.748269   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.748330   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.748472   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.851227   35758 ssh_runner.go:195] Run: systemctl --version
	I1011 21:27:13.857217   35758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:27:14.027859   35758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:27:14.033636   35758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:27:14.033703   35758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:27:14.042533   35758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1011 21:27:14.042555   35758 start.go:495] detecting cgroup driver to use...
	I1011 21:27:14.042608   35758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:27:14.058271   35758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:27:14.071785   35758 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:27:14.071833   35758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:27:14.084811   35758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:27:14.097838   35758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:27:14.274228   35758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:27:14.433674   35758 docker.go:233] disabling docker service ...
	I1011 21:27:14.433753   35758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:27:14.453879   35758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:27:14.467983   35758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:27:14.634387   35758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:27:14.778261   35758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:27:14.792975   35758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:27:14.811746   35758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:27:14.811817   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.822547   35758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:27:14.822610   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.835095   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.847123   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.859235   35758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:27:14.871534   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.883686   35758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.894338   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.905153   35758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:27:14.915082   35758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:27:14.924895   35758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:27:15.066392   35758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:27:19.357484   35758 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.291053564s)
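(Illustrative aside, not part of the log: the sed edits above rewrite the CRI-O drop-in before the restart. The file contents below are inferred from those commands, not shown in the log, so treat this as a sketch of what a check might return:)

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected (assumed) values after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",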
	I1011 21:27:19.357514   35758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:27:19.357565   35758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:27:19.363008   35758 start.go:563] Will wait 60s for crictl version
	I1011 21:27:19.363051   35758 ssh_runner.go:195] Run: which crictl
	I1011 21:27:19.366764   35758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:27:19.411413   35758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:27:19.411497   35758 ssh_runner.go:195] Run: crio --version
	I1011 21:27:19.440951   35758 ssh_runner.go:195] Run: crio --version
	I1011 21:27:19.472014   35758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:27:19.473444   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:27:19.476141   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:19.476627   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:19.476653   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:19.476828   35758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:27:19.481431   35758 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:27:19.481562   35758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:27:19.481605   35758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:27:19.525052   35758 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:27:19.525070   35758 crio.go:433] Images already preloaded, skipping extraction
	I1011 21:27:19.525119   35758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:27:19.560180   35758 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:27:19.560210   35758 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:27:19.560221   35758 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:27:19.560312   35758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:27:19.560389   35758 ssh_runner.go:195] Run: crio config
	I1011 21:27:19.618816   35758 cni.go:84] Creating CNI manager for ""
	I1011 21:27:19.618837   35758 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1011 21:27:19.618847   35758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:27:19.618873   35758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:27:19.618992   35758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
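(Illustrative aside, not part of the log: the four documents above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — are what later gets copied to /var/tmp/minikube/kubeadm.yaml.new in the 2150-byte scp below. A sanity check one could run on the node, assuming the `validate` subcommand is available in the bundled kubeadm v1.31.1:)

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new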
	
	I1011 21:27:19.619012   35758 kube-vip.go:115] generating kube-vip config ...
	I1011 21:27:19.619050   35758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:27:19.631210   35758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:27:19.631329   35758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
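(Illustrative aside, not part of the log: per the manifest above, kube-vip runs as a static pod with leader election on the plndr-cp-lock lease and advertises the VIP 192.168.39.254 on port 8443. A hypothetical spot-check once the control plane is back up, not something the test performs:)

    kubectl -n kube-system get lease plndr-cp-lock    # shows which control-plane node currently holds the VIP
    curl -k https://192.168.39.254:8443/healthz       # the VIP should answer once a leader exists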
	I1011 21:27:19.631388   35758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:27:19.641233   35758 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:27:19.641292   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:27:19.651185   35758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:27:19.667772   35758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:27:19.683352   35758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:27:19.698816   35758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:27:19.716977   35758 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:27:19.720864   35758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:27:19.860158   35758 ssh_runner.go:195] Run: sudo systemctl start kubelet
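(Illustrative aside, not part of the log: the 10-kubeadm.conf drop-in written above resets ExecStart and points kubelet at the bundled v1.31.1 binary with the --node-ip/--hostname-override flags shown earlier. A hypothetical verification after the daemon-reload and start:)

    systemctl cat kubelet | grep -A2 'ExecStart='    # shows the drop-in override
    systemctl is-active kubelet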
	I1011 21:27:19.875764   35758 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:27:19.875785   35758 certs.go:194] generating shared ca certs ...
	I1011 21:27:19.875800   35758 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.875948   35758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:27:19.876005   35758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:27:19.876018   35758 certs.go:256] generating profile certs ...
	I1011 21:27:19.876122   35758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:27:19.876155   35758 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2
	I1011 21:27:19.876186   35758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:27:19.975371   35758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 ...
	I1011 21:27:19.975398   35758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2: {Name:mk8f34b9b908e3bae8427d3296dba7b7258c76a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.975593   35758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2 ...
	I1011 21:27:19.975611   35758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2: {Name:mkeb1e23517da252f1fb5610dc6482f7a2201a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.975703   35758 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:27:19.975883   35758 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:27:19.976058   35758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:27:19.976076   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:27:19.976094   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:27:19.976114   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:27:19.976132   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:27:19.976149   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:27:19.976163   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:27:19.976193   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:27:19.976212   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:27:19.976292   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:27:19.976342   35758 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:27:19.976355   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:27:19.976383   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:27:19.976421   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:27:19.976453   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:27:19.976509   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:27:19.976544   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:19.976587   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:27:19.976607   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:27:19.977223   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:27:20.003537   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:27:20.026472   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:27:20.049398   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:27:20.072982   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 21:27:20.095721   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:27:20.118642   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:27:20.141400   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:27:20.164448   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:27:20.187900   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:27:20.262658   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:27:20.469642   35758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:27:20.551917   35758 ssh_runner.go:195] Run: openssl version
	I1011 21:27:20.566260   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:27:20.651044   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.693393   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.693469   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.742057   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:27:20.816298   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:27:20.893531   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.910194   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.910257   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.919111   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:27:20.944568   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:27:21.023507   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.042361   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.042432   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.054420   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:27:21.075798   35758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:27:21.080371   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 21:27:21.088087   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 21:27:21.097563   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 21:27:21.106330   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 21:27:21.115047   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 21:27:21.120960   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
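(Illustrative aside, not part of the log: each `openssl x509 ... -checkend 86400` call above exits non-zero if the certificate expires within 86400 seconds, i.e. 24 hours, presumably so near-expiry certs get regenerated before the cluster restarts. An equivalent manual check, shown only for illustration:)

    openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"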
	I1011 21:27:21.131039   35758 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:27:21.131205   35758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:27:21.131271   35758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:27:21.209890   35758 cri.go:89] found id: "8fc935896e7caf165806222d235c011d17bea7f8ecadaa3a8ec11d0dd508c62d"
	I1011 21:27:21.209908   35758 cri.go:89] found id: "8a8fa63b97fe4d909c8f0a382b18cefe67c929962963cece0bb6c2da1b8395ba"
	I1011 21:27:21.209912   35758 cri.go:89] found id: "5f196aa4e7ce7a6d72460018492de36bc0daa02e71cfd4b00e217a95db939af8"
	I1011 21:27:21.209916   35758 cri.go:89] found id: "bca972460125e08a34c49894b1ac27e42e558c7a79f29bf4d96d8f79828b3e15"
	I1011 21:27:21.209918   35758 cri.go:89] found id: "c94234ade47686228babd59e946ecb1b71d35e2f394f3890d23489db5cd3d710"
	I1011 21:27:21.209922   35758 cri.go:89] found id: "ac474faa2108e43e4d5c0be813e9b95327575ad45d2fedfdf013c83b9372af19"
	I1011 21:27:21.209924   35758 cri.go:89] found id: "73ccd09cbaf150f3ef0e0481e2c7ce6f57c07496a0d181d24281fd0ece093fe9"
	I1011 21:27:21.209926   35758 cri.go:89] found id: "0aaa9aaea4517ddc411226242274f70147ec7288f683d8f0e48d094689650332"
	I1011 21:27:21.209937   35758 cri.go:89] found id: "add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6"
	I1011 21:27:21.209941   35758 cri.go:89] found id: "f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb"
	I1011 21:27:21.209953   35758 cri.go:89] found id: "01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536"
	I1011 21:27:21.209959   35758 cri.go:89] found id: "9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952"
	I1011 21:27:21.209961   35758 cri.go:89] found id: "4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b"
	I1011 21:27:21.209964   35758 cri.go:89] found id: "7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d"
	I1011 21:27:21.209968   35758 cri.go:89] found id: "1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865"
	I1011 21:27:21.209973   35758 cri.go:89] found id: "093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94"
	I1011 21:27:21.209976   35758 cri.go:89] found id: "b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948"
	I1011 21:27:21.209980   35758 cri.go:89] found id: "1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a"
	I1011 21:27:21.209984   35758 cri.go:89] found id: ""
	I1011 21:27:21.210021   35758 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 stop -v=7 --alsologtostderr
E1011 21:30:24.492721   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-610874 stop -v=7 --alsologtostderr: exit status 82 (2m0.458671616s)

                                                
                                                
-- stdout --
	* Stopping node "ha-610874-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:30:01.120611   37545 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:30:01.120707   37545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:30:01.120715   37545 out.go:358] Setting ErrFile to fd 2...
	I1011 21:30:01.120719   37545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:30:01.120875   37545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:30:01.121070   37545 out.go:352] Setting JSON to false
	I1011 21:30:01.121141   37545 mustload.go:65] Loading cluster: ha-610874
	I1011 21:30:01.121593   37545 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:30:01.121679   37545 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:30:01.121853   37545 mustload.go:65] Loading cluster: ha-610874
	I1011 21:30:01.121974   37545 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:30:01.122001   37545 stop.go:39] StopHost: ha-610874-m04
	I1011 21:30:01.122455   37545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:30:01.122496   37545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:30:01.139385   37545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I1011 21:30:01.139823   37545 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:30:01.140411   37545 main.go:141] libmachine: Using API Version  1
	I1011 21:30:01.140442   37545 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:30:01.140778   37545 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:30:01.142869   37545 out.go:177] * Stopping node "ha-610874-m04"  ...
	I1011 21:30:01.144506   37545 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 21:30:01.144538   37545 main.go:141] libmachine: (ha-610874-m04) Calling .DriverName
	I1011 21:30:01.144813   37545 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 21:30:01.144838   37545 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHHostname
	I1011 21:30:01.147949   37545 main.go:141] libmachine: (ha-610874-m04) DBG | domain ha-610874-m04 has defined MAC address 52:54:00:4d:ac:22 in network mk-ha-610874
	I1011 21:30:01.148351   37545 main.go:141] libmachine: (ha-610874-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:ac:22", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:29:29 +0000 UTC Type:0 Mac:52:54:00:4d:ac:22 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-610874-m04 Clientid:01:52:54:00:4d:ac:22}
	I1011 21:30:01.148372   37545 main.go:141] libmachine: (ha-610874-m04) DBG | domain ha-610874-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:4d:ac:22 in network mk-ha-610874
	I1011 21:30:01.148568   37545 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHPort
	I1011 21:30:01.148757   37545 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHKeyPath
	I1011 21:30:01.148912   37545 main.go:141] libmachine: (ha-610874-m04) Calling .GetSSHUsername
	I1011 21:30:01.149033   37545 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874-m04/id_rsa Username:docker}
	I1011 21:30:01.234685   37545 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 21:30:01.271615   37545 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
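(Illustrative aside, not part of the log: before stopping the node, the two `rsync --archive --relative` runs above copy /etc/cni and /etc/kubernetes under /var/lib/minikube/backup, preserving their full source paths, which is what --relative does. A hypothetical way to confirm the backup on the node:)

    sudo find /var/lib/minikube/backup -maxdepth 3 -type d
    # Expected (assumed) layout:
    #   /var/lib/minikube/backup/etc/cni
    #   /var/lib/minikube/backup/etc/kubernetes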
	I1011 21:30:01.325379   37545 main.go:141] libmachine: Stopping "ha-610874-m04"...
	I1011 21:30:01.325413   37545 main.go:141] libmachine: (ha-610874-m04) Calling .GetState
	I1011 21:30:01.326975   37545 main.go:141] libmachine: (ha-610874-m04) Calling .Stop
	I1011 21:30:01.331204   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 0/120
	I1011 21:30:02.332866   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 1/120
	I1011 21:30:03.334730   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 2/120
	I1011 21:30:04.336253   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 3/120
	I1011 21:30:05.337955   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 4/120
	I1011 21:30:06.340183   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 5/120
	I1011 21:30:07.341629   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 6/120
	I1011 21:30:08.344000   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 7/120
	I1011 21:30:09.345556   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 8/120
	I1011 21:30:10.347362   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 9/120
	I1011 21:30:11.349387   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 10/120
	I1011 21:30:12.351884   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 11/120
	I1011 21:30:13.353903   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 12/120
	I1011 21:30:14.355522   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 13/120
	I1011 21:30:15.357085   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 14/120
	I1011 21:30:16.359307   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 15/120
	I1011 21:30:17.361417   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 16/120
	I1011 21:30:18.362651   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 17/120
	I1011 21:30:19.364110   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 18/120
	I1011 21:30:20.365998   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 19/120
	I1011 21:30:21.368199   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 20/120
	I1011 21:30:22.369679   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 21/120
	I1011 21:30:23.371013   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 22/120
	I1011 21:30:24.373111   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 23/120
	I1011 21:30:25.374501   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 24/120
	I1011 21:30:26.376464   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 25/120
	I1011 21:30:27.377702   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 26/120
	I1011 21:30:28.378939   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 27/120
	I1011 21:30:29.380203   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 28/120
	I1011 21:30:30.381645   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 29/120
	I1011 21:30:31.384015   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 30/120
	I1011 21:30:32.385539   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 31/120
	I1011 21:30:33.386763   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 32/120
	I1011 21:30:34.388105   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 33/120
	I1011 21:30:35.389616   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 34/120
	I1011 21:30:36.391432   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 35/120
	I1011 21:30:37.392906   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 36/120
	I1011 21:30:38.394375   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 37/120
	I1011 21:30:39.395687   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 38/120
	I1011 21:30:40.397965   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 39/120
	I1011 21:30:41.400020   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 40/120
	I1011 21:30:42.401348   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 41/120
	I1011 21:30:43.402678   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 42/120
	I1011 21:30:44.404232   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 43/120
	I1011 21:30:45.405583   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 44/120
	I1011 21:30:46.407905   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 45/120
	I1011 21:30:47.409362   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 46/120
	I1011 21:30:48.410968   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 47/120
	I1011 21:30:49.413089   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 48/120
	I1011 21:30:50.414567   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 49/120
	I1011 21:30:51.415893   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 50/120
	I1011 21:30:52.417276   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 51/120
	I1011 21:30:53.418950   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 52/120
	I1011 21:30:54.420052   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 53/120
	I1011 21:30:55.421508   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 54/120
	I1011 21:30:56.423734   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 55/120
	I1011 21:30:57.425194   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 56/120
	I1011 21:30:58.426445   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 57/120
	I1011 21:30:59.427734   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 58/120
	I1011 21:31:00.429364   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 59/120
	I1011 21:31:01.431325   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 60/120
	I1011 21:31:02.432698   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 61/120
	I1011 21:31:03.433992   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 62/120
	I1011 21:31:04.435428   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 63/120
	I1011 21:31:05.436752   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 64/120
	I1011 21:31:06.438746   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 65/120
	I1011 21:31:07.440081   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 66/120
	I1011 21:31:08.441281   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 67/120
	I1011 21:31:09.442649   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 68/120
	I1011 21:31:10.444054   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 69/120
	I1011 21:31:11.446027   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 70/120
	I1011 21:31:12.447437   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 71/120
	I1011 21:31:13.449022   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 72/120
	I1011 21:31:14.450548   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 73/120
	I1011 21:31:15.451866   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 74/120
	I1011 21:31:16.453851   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 75/120
	I1011 21:31:17.455241   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 76/120
	I1011 21:31:18.456465   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 77/120
	I1011 21:31:19.457759   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 78/120
	I1011 21:31:20.459055   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 79/120
	I1011 21:31:21.461130   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 80/120
	I1011 21:31:22.462388   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 81/120
	I1011 21:31:23.463792   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 82/120
	I1011 21:31:24.465054   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 83/120
	I1011 21:31:25.466450   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 84/120
	I1011 21:31:26.468228   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 85/120
	I1011 21:31:27.469621   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 86/120
	I1011 21:31:28.470951   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 87/120
	I1011 21:31:29.473040   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 88/120
	I1011 21:31:30.474499   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 89/120
	I1011 21:31:31.476830   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 90/120
	I1011 21:31:32.478131   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 91/120
	I1011 21:31:33.480126   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 92/120
	I1011 21:31:34.481561   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 93/120
	I1011 21:31:35.483658   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 94/120
	I1011 21:31:36.485457   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 95/120
	I1011 21:31:37.486669   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 96/120
	I1011 21:31:38.487889   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 97/120
	I1011 21:31:39.489294   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 98/120
	I1011 21:31:40.490827   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 99/120
	I1011 21:31:41.492926   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 100/120
	I1011 21:31:42.494255   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 101/120
	I1011 21:31:43.496094   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 102/120
	I1011 21:31:44.497506   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 103/120
	I1011 21:31:45.498891   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 104/120
	I1011 21:31:46.500652   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 105/120
	I1011 21:31:47.502244   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 106/120
	I1011 21:31:48.503793   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 107/120
	I1011 21:31:49.505491   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 108/120
	I1011 21:31:50.506979   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 109/120
	I1011 21:31:51.509046   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 110/120
	I1011 21:31:52.510425   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 111/120
	I1011 21:31:53.511795   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 112/120
	I1011 21:31:54.513121   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 113/120
	I1011 21:31:55.514383   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 114/120
	I1011 21:31:56.516328   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 115/120
	I1011 21:31:57.517942   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 116/120
	I1011 21:31:58.519466   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 117/120
	I1011 21:31:59.521390   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 118/120
	I1011 21:32:00.522783   37545 main.go:141] libmachine: (ha-610874-m04) Waiting for machine to stop 119/120
	I1011 21:32:01.523624   37545 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 21:32:01.523668   37545 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1011 21:32:01.526064   37545 out.go:201] 
	W1011 21:32:01.527877   37545 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1011 21:32:01.527900   37545 out.go:270] * 
	* 
	W1011 21:32:01.530201   37545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 21:32:01.531475   37545 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-610874 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
E1011 21:32:06.383563   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr: (18.898030551s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr": 
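(Illustrative aside, not part of the log: the stop failed with GUEST_STOP_TIMEOUT because the ha-610874-m04 VM never left the "Running" state in 120 one-second polls, so the command exited with status 82 and the status assertions above found the cluster still partially up. A manual fallback for a wedged KVM guest, an assumption about operator action rather than anything the test attempts, would be to force it off via libvirt:)

    virsh destroy ha-610874-m04     # hard power-off of the stuck domain
    virsh domstate ha-610874-m04    # should now report "shut off"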
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-610874 -n ha-610874
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 logs -n 25: (1.980167097s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m04 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp testdata/cp-test.txt                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt                       |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874 sudo cat                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874.txt                                 |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m02 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n                                                                 | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:20 UTC |
	|         | ha-610874-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-610874 ssh -n ha-610874-m03 sudo cat                                          | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:20 UTC | 11 Oct 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-610874 node stop m02 -v=7                                                     | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-610874 node start m02 -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-610874 -v=7                                                           | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-610874 -v=7                                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-610874 --wait=true -v=7                                                    | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:25 UTC | 11 Oct 24 21:29 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-610874                                                                | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:29 UTC |                     |
	| node    | ha-610874 node delete m03 -v=7                                                   | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:29 UTC | 11 Oct 24 21:29 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-610874 stop -v=7                                                              | ha-610874 | jenkins | v1.34.0 | 11 Oct 24 21:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
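	The last two audit rows show that the `stop` issued at 21:30 never recorded an End Time, which matches the exit status 82 reported above. For a manual reproduction attempt, a bounded retry such as the sketch below (the 120-second budget is an arbitrary illustration, not a minikube default) avoids letting the command hang indefinitely:

	    # Bound the stop, then fall back to collecting status and logs for a bug report.
	    if ! timeout 120 out/minikube-linux-amd64 -p ha-610874 stop -v=7 --alsologtostderr; then
	        out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr || true
	        out/minikube-linux-amd64 -p ha-610874 logs --file=logs.txt
	    fi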
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:25:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:25:41.865984   35758 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:25:41.866075   35758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:25:41.866083   35758 out.go:358] Setting ErrFile to fd 2...
	I1011 21:25:41.866087   35758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:25:41.866257   35758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:25:41.866948   35758 out.go:352] Setting JSON to false
	I1011 21:25:41.867818   35758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4087,"bootTime":1728677855,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:25:41.867947   35758 start.go:139] virtualization: kvm guest
	I1011 21:25:41.870106   35758 out.go:177] * [ha-610874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:25:41.871650   35758 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:25:41.871688   35758 notify.go:220] Checking for updates...
	I1011 21:25:41.873898   35758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:25:41.875077   35758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:25:41.876222   35758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:25:41.877352   35758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:25:41.878470   35758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:25:41.879772   35758 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:25:41.879869   35758 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:25:41.880321   35758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:25:41.880362   35758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:25:41.895047   35758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I1011 21:25:41.895555   35758 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:25:41.896048   35758 main.go:141] libmachine: Using API Version  1
	I1011 21:25:41.896067   35758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:25:41.896359   35758 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:25:41.896536   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.932372   35758 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:25:41.933519   35758 start.go:297] selected driver: kvm2
	I1011 21:25:41.933533   35758 start.go:901] validating driver "kvm2" against &{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:25:41.933659   35758 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:25:41.933998   35758 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:25:41.934071   35758 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:25:41.947927   35758 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:25:41.948568   35758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:25:41.948604   35758 cni.go:84] Creating CNI manager for ""
	I1011 21:25:41.948658   35758 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1011 21:25:41.948723   35758 start.go:340] cluster config:
	{Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fal
se headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:25:41.948867   35758 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:25:41.951013   35758 out.go:177] * Starting "ha-610874" primary control-plane node in "ha-610874" cluster
	I1011 21:25:41.951940   35758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:25:41.951972   35758 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:25:41.951984   35758 cache.go:56] Caching tarball of preloaded images
	I1011 21:25:41.952069   35758 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:25:41.952080   35758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:25:41.952206   35758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/config.json ...
	I1011 21:25:41.952414   35758 start.go:360] acquireMachinesLock for ha-610874: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:25:41.952468   35758 start.go:364] duration metric: took 36.13µs to acquireMachinesLock for "ha-610874"
	I1011 21:25:41.952487   35758 start.go:96] Skipping create...Using existing machine configuration
	I1011 21:25:41.952496   35758 fix.go:54] fixHost starting: 
	I1011 21:25:41.952757   35758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:25:41.952794   35758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:25:41.966359   35758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I1011 21:25:41.966873   35758 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:25:41.967368   35758 main.go:141] libmachine: Using API Version  1
	I1011 21:25:41.967384   35758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:25:41.967706   35758 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:25:41.967862   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.968023   35758 main.go:141] libmachine: (ha-610874) Calling .GetState
	I1011 21:25:41.969605   35758 fix.go:112] recreateIfNeeded on ha-610874: state=Running err=<nil>
	W1011 21:25:41.969629   35758 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 21:25:41.971228   35758 out.go:177] * Updating the running kvm2 "ha-610874" VM ...
	I1011 21:25:41.972244   35758 machine.go:93] provisionDockerMachine start ...
	I1011 21:25:41.972259   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:25:41.972427   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:41.974586   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:41.975012   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:41.975034   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:41.975145   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:41.975273   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:41.975414   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:41.975546   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:41.975687   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:41.975860   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:41.975871   35758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:25:42.092032   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:25:42.092065   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.092286   35758 buildroot.go:166] provisioning hostname "ha-610874"
	I1011 21:25:42.092314   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.092494   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.095150   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.095535   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.095569   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.095726   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.095904   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.096062   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.096178   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.096351   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.096558   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.096572   35758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-610874 && echo "ha-610874" | sudo tee /etc/hostname
	I1011 21:25:42.227142   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-610874
	
	I1011 21:25:42.227164   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.229708   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.230053   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.230074   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.230237   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.230406   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.230574   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.230704   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.230817   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.230980   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.230994   35758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-610874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-610874/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-610874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:25:42.347624   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:25:42.347648   35758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:25:42.347663   35758 buildroot.go:174] setting up certificates
	I1011 21:25:42.347674   35758 provision.go:84] configureAuth start
	I1011 21:25:42.347684   35758 main.go:141] libmachine: (ha-610874) Calling .GetMachineName
	I1011 21:25:42.347954   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:25:42.350461   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.350857   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.350876   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.351080   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.353537   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.353857   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.353888   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.353987   35758 provision.go:143] copyHostCerts
	I1011 21:25:42.354010   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:25:42.354045   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:25:42.354058   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:25:42.354125   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:25:42.354191   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:25:42.354219   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:25:42.354226   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:25:42.354249   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:25:42.354292   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:25:42.354308   35758 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:25:42.354314   35758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:25:42.354336   35758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:25:42.354390   35758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.ha-610874 san=[127.0.0.1 192.168.39.10 ha-610874 localhost minikube]
	I1011 21:25:42.400146   35758 provision.go:177] copyRemoteCerts
	I1011 21:25:42.400197   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:25:42.400222   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.402685   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.403052   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.403075   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.403219   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.403394   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.403529   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.403630   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:25:42.494067   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:25:42.494141   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:25:42.520514   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:25:42.520596   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1011 21:25:42.546929   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:25:42.547006   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 21:25:42.571540   35758 provision.go:87] duration metric: took 223.855512ms to configureAuth
	I1011 21:25:42.571564   35758 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:25:42.571804   35758 config.go:182] Loaded profile config "ha-610874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:25:42.571891   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:25:42.574450   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.574909   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:25:42.574937   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:25:42.575121   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:25:42.575321   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.575480   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:25:42.575623   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:25:42.575786   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:25:42.575983   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:25:42.576001   35758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:27:13.397650   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:27:13.397675   35758 machine.go:96] duration metric: took 1m31.425420131s to provisionDockerMachine
	I1011 21:27:13.397689   35758 start.go:293] postStartSetup for "ha-610874" (driver="kvm2")
	I1011 21:27:13.397703   35758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:27:13.397744   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.398084   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:27:13.398117   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.401442   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.401908   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.401926   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.402080   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.402286   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.402448   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.402556   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.489647   35758 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:27:13.493850   35758 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:27:13.493872   35758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:27:13.493934   35758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:27:13.494011   35758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:27:13.494020   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:27:13.494109   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:27:13.503176   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:27:13.530740   35758 start.go:296] duration metric: took 133.036947ms for postStartSetup
	I1011 21:27:13.530781   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.531040   35758 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1011 21:27:13.531061   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.533565   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.534021   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.534062   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.534214   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.534418   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.534662   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.534845   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	W1011 21:27:13.621467   35758 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1011 21:27:13.621491   35758 fix.go:56] duration metric: took 1m31.668996621s for fixHost
	I1011 21:27:13.621511   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.624463   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.624827   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.624867   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.625044   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.625249   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.625383   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.625631   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.625819   35758 main.go:141] libmachine: Using SSH client type: native
	I1011 21:27:13.625988   35758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1011 21:27:13.625998   35758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:27:13.739820   35758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728682033.695405341
	
	I1011 21:27:13.739846   35758 fix.go:216] guest clock: 1728682033.695405341
	I1011 21:27:13.739855   35758 fix.go:229] Guest: 2024-10-11 21:27:13.695405341 +0000 UTC Remote: 2024-10-11 21:27:13.621498056 +0000 UTC m=+91.792554019 (delta=73.907285ms)
	I1011 21:27:13.739874   35758 fix.go:200] guest clock delta is within tolerance: 73.907285ms
	I1011 21:27:13.739879   35758 start.go:83] releasing machines lock for "ha-610874", held for 1m31.787400304s
	I1011 21:27:13.739897   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.740136   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:27:13.743172   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.743570   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.743605   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.743755   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744239   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744385   35758 main.go:141] libmachine: (ha-610874) Calling .DriverName
	I1011 21:27:13.744500   35758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:27:13.744544   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.744582   35758 ssh_runner.go:195] Run: cat /version.json
	I1011 21:27:13.744604   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHHostname
	I1011 21:27:13.747134   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747374   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747591   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.747620   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747775   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.747814   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:13.747841   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:13.747939   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.748019   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHPort
	I1011 21:27:13.748147   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.748151   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHKeyPath
	I1011 21:27:13.748269   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.748330   35758 main.go:141] libmachine: (ha-610874) Calling .GetSSHUsername
	I1011 21:27:13.748472   35758 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/ha-610874/id_rsa Username:docker}
	I1011 21:27:13.851227   35758 ssh_runner.go:195] Run: systemctl --version
	I1011 21:27:13.857217   35758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:27:14.027859   35758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 21:27:14.033636   35758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:27:14.033703   35758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:27:14.042533   35758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1011 21:27:14.042555   35758 start.go:495] detecting cgroup driver to use...
	I1011 21:27:14.042608   35758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:27:14.058271   35758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:27:14.071785   35758 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:27:14.071833   35758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:27:14.084811   35758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:27:14.097838   35758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:27:14.274228   35758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:27:14.433674   35758 docker.go:233] disabling docker service ...
	I1011 21:27:14.433753   35758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:27:14.453879   35758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:27:14.467983   35758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:27:14.634387   35758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:27:14.778261   35758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:27:14.792975   35758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:27:14.811746   35758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:27:14.811817   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.822547   35758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:27:14.822610   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.835095   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.847123   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.859235   35758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:27:14.871534   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.883686   35758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.894338   35758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:27:14.905153   35758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:27:14.915082   35758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:27:14.924895   35758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:27:15.066392   35758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:27:19.357484   35758 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.291053564s)
	I1011 21:27:19.357514   35758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:27:19.357565   35758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:27:19.363008   35758 start.go:563] Will wait 60s for crictl version
	I1011 21:27:19.363051   35758 ssh_runner.go:195] Run: which crictl
	I1011 21:27:19.366764   35758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:27:19.411413   35758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:27:19.411497   35758 ssh_runner.go:195] Run: crio --version
	I1011 21:27:19.440951   35758 ssh_runner.go:195] Run: crio --version
	I1011 21:27:19.472014   35758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:27:19.473444   35758 main.go:141] libmachine: (ha-610874) Calling .GetIP
	I1011 21:27:19.476141   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:19.476627   35758 main.go:141] libmachine: (ha-610874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c7:da", ip: ""} in network mk-ha-610874: {Iface:virbr1 ExpiryTime:2024-10-11 22:16:31 +0000 UTC Type:0 Mac:52:54:00:5f:c7:da Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:ha-610874 Clientid:01:52:54:00:5f:c7:da}
	I1011 21:27:19.476653   35758 main.go:141] libmachine: (ha-610874) DBG | domain ha-610874 has defined IP address 192.168.39.10 and MAC address 52:54:00:5f:c7:da in network mk-ha-610874
	I1011 21:27:19.476828   35758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:27:19.481431   35758 kubeadm.go:883] updating cluster {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:27:19.481562   35758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:27:19.481605   35758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:27:19.525052   35758 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:27:19.525070   35758 crio.go:433] Images already preloaded, skipping extraction
	I1011 21:27:19.525119   35758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:27:19.560180   35758 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:27:19.560210   35758 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:27:19.560221   35758 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1011 21:27:19.560312   35758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-610874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:27:19.560389   35758 ssh_runner.go:195] Run: crio config
	I1011 21:27:19.618816   35758 cni.go:84] Creating CNI manager for ""
	I1011 21:27:19.618837   35758 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1011 21:27:19.618847   35758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:27:19.618873   35758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-610874 NodeName:ha-610874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:27:19.618992   35758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-610874"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:27:19.619012   35758 kube-vip.go:115] generating kube-vip config ...
	I1011 21:27:19.619050   35758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1011 21:27:19.631210   35758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1011 21:27:19.631329   35758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1011 21:27:19.631388   35758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:27:19.641233   35758 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:27:19.641292   35758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1011 21:27:19.651185   35758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1011 21:27:19.667772   35758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:27:19.683352   35758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1011 21:27:19.698816   35758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1011 21:27:19.716977   35758 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1011 21:27:19.720864   35758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:27:19.860158   35758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:27:19.875764   35758 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874 for IP: 192.168.39.10
	I1011 21:27:19.875785   35758 certs.go:194] generating shared ca certs ...
	I1011 21:27:19.875800   35758 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.875948   35758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:27:19.876005   35758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:27:19.876018   35758 certs.go:256] generating profile certs ...
	I1011 21:27:19.876122   35758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/client.key
	I1011 21:27:19.876155   35758 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2
	I1011 21:27:19.876186   35758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10 192.168.39.11 192.168.39.222 192.168.39.254]
	I1011 21:27:19.975371   35758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 ...
	I1011 21:27:19.975398   35758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2: {Name:mk8f34b9b908e3bae8427d3296dba7b7258c76a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.975593   35758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2 ...
	I1011 21:27:19.975611   35758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2: {Name:mkeb1e23517da252f1fb5610dc6482f7a2201a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:27:19.975703   35758 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt.1f2b8ed2 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt
	I1011 21:27:19.975883   35758 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key.1f2b8ed2 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key
	I1011 21:27:19.976058   35758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key
	I1011 21:27:19.976076   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:27:19.976094   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:27:19.976114   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:27:19.976132   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:27:19.976149   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:27:19.976163   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:27:19.976193   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:27:19.976212   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:27:19.976292   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:27:19.976342   35758 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:27:19.976355   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:27:19.976383   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:27:19.976421   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:27:19.976453   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:27:19.976509   35758 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:27:19.976544   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:19.976587   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:27:19.976607   35758 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:27:19.977223   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:27:20.003537   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:27:20.026472   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:27:20.049398   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:27:20.072982   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 21:27:20.095721   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:27:20.118642   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:27:20.141400   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/ha-610874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:27:20.164448   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:27:20.187900   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:27:20.262658   35758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:27:20.469642   35758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:27:20.551917   35758 ssh_runner.go:195] Run: openssl version
	I1011 21:27:20.566260   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:27:20.651044   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.693393   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.693469   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:27:20.742057   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:27:20.816298   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:27:20.893531   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.910194   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.910257   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:27:20.919111   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:27:20.944568   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:27:21.023507   35758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.042361   35758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.042432   35758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:27:21.054420   35758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:27:21.075798   35758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:27:21.080371   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 21:27:21.088087   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 21:27:21.097563   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 21:27:21.106330   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 21:27:21.115047   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 21:27:21.120960   35758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 21:27:21.131039   35758 kubeadm.go:392] StartCluster: {Name:ha-610874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-610874 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:27:21.131205   35758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:27:21.131271   35758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:27:21.209890   35758 cri.go:89] found id: "8fc935896e7caf165806222d235c011d17bea7f8ecadaa3a8ec11d0dd508c62d"
	I1011 21:27:21.209908   35758 cri.go:89] found id: "8a8fa63b97fe4d909c8f0a382b18cefe67c929962963cece0bb6c2da1b8395ba"
	I1011 21:27:21.209912   35758 cri.go:89] found id: "5f196aa4e7ce7a6d72460018492de36bc0daa02e71cfd4b00e217a95db939af8"
	I1011 21:27:21.209916   35758 cri.go:89] found id: "bca972460125e08a34c49894b1ac27e42e558c7a79f29bf4d96d8f79828b3e15"
	I1011 21:27:21.209918   35758 cri.go:89] found id: "c94234ade47686228babd59e946ecb1b71d35e2f394f3890d23489db5cd3d710"
	I1011 21:27:21.209922   35758 cri.go:89] found id: "ac474faa2108e43e4d5c0be813e9b95327575ad45d2fedfdf013c83b9372af19"
	I1011 21:27:21.209924   35758 cri.go:89] found id: "73ccd09cbaf150f3ef0e0481e2c7ce6f57c07496a0d181d24281fd0ece093fe9"
	I1011 21:27:21.209926   35758 cri.go:89] found id: "0aaa9aaea4517ddc411226242274f70147ec7288f683d8f0e48d094689650332"
	I1011 21:27:21.209937   35758 cri.go:89] found id: "add7da026dcc432e1d6eb3b92ad0bc0e811323a99ffb76ab3bd58f21184173e6"
	I1011 21:27:21.209941   35758 cri.go:89] found id: "f6f79107165988c1ea5d41454b6e8292c035da472ced97a639614f35213e4fbb"
	I1011 21:27:21.209953   35758 cri.go:89] found id: "01564ba5bc1e8348535ecfce1665a0483722808b46999831d96f14bd02f6c536"
	I1011 21:27:21.209959   35758 cri.go:89] found id: "9d5b2015aad60f411567956ee64433709e634951dee64e29066b98759fdca952"
	I1011 21:27:21.209961   35758 cri.go:89] found id: "4af1bc183cfbe3ab21a91f41a5065722589a152a1a54e475009a5f2a73be708b"
	I1011 21:27:21.209964   35758 cri.go:89] found id: "7009deb3ff5ef8186521445d47345bbbc4cc287e3c8d90af64a91f9c2b05b07d"
	I1011 21:27:21.209968   35758 cri.go:89] found id: "1bb0907534c8f1803de8690ee45796589bbcde16d6649dc50f696544a0150865"
	I1011 21:27:21.209973   35758 cri.go:89] found id: "093fe14b91d96e3d1fe6307fc7bffda9eea3defc55c18d557df6f9e1b1226d94"
	I1011 21:27:21.209976   35758 cri.go:89] found id: "b6a994e3f4bd91647db1b468f7e051276bb32bbd74e0ba8e74f00fd5a8f1d948"
	I1011 21:27:21.209980   35758 cri.go:89] found id: "1cf13112be94fb78ba1f84336198913ab3539cd3238cead6076ecc103df9008a"
	I1011 21:27:21.209984   35758 cri.go:89] found id: ""
	I1011 21:27:21.210021   35758 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-610874 -n ha-610874
helpers_test.go:261: (dbg) Run:  kubectl --context ha-610874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.93s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (326.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-805849
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-805849
E1011 21:47:06.383961   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-805849: exit status 82 (2m1.854807508s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-805849-m03"  ...
	* Stopping node "multinode-805849-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-805849" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-805849 --wait=true -v=8 --alsologtostderr
E1011 21:50:24.492620   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-805849 --wait=true -v=8 --alsologtostderr: (3m21.612595183s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-805849
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-805849 -n multinode-805849
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 logs -n 25: (2.009065107s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849:/home/docker/cp-test_multinode-805849-m02_multinode-805849.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849 sudo cat                                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m02_multinode-805849.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03:/home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849-m03 sudo cat                                   | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp testdata/cp-test.txt                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849:/home/docker/cp-test_multinode-805849-m03_multinode-805849.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849 sudo cat                                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02:/home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849-m02 sudo cat                                   | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-805849 node stop m03                                                          | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	| node    | multinode-805849 node start                                                             | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| stop    | -p multinode-805849                                                                     | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| start   | -p multinode-805849                                                                     | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:48:30.892131   47456 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:48:30.892251   47456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:48:30.892260   47456 out.go:358] Setting ErrFile to fd 2...
	I1011 21:48:30.892265   47456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:48:30.892444   47456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:48:30.892943   47456 out.go:352] Setting JSON to false
	I1011 21:48:30.893784   47456 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5456,"bootTime":1728677855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:48:30.893877   47456 start.go:139] virtualization: kvm guest
	I1011 21:48:30.896118   47456 out.go:177] * [multinode-805849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:48:30.897440   47456 notify.go:220] Checking for updates...
	I1011 21:48:30.897445   47456 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:48:30.899321   47456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:48:30.901232   47456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:48:30.902579   47456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:48:30.903605   47456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:48:30.904778   47456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:48:30.906672   47456 config.go:182] Loaded profile config "multinode-805849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:48:30.906796   47456 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:48:30.907464   47456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:48:30.907528   47456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:48:30.922938   47456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I1011 21:48:30.923467   47456 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:48:30.923953   47456 main.go:141] libmachine: Using API Version  1
	I1011 21:48:30.923971   47456 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:48:30.924274   47456 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:48:30.924462   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:30.959590   47456 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:48:30.960927   47456 start.go:297] selected driver: kvm2
	I1011 21:48:30.960938   47456 start.go:901] validating driver "kvm2" against &{Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:48:30.961075   47456 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:48:30.961419   47456 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:48:30.961493   47456 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:48:30.975675   47456 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:48:30.976300   47456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:48:30.976333   47456 cni.go:84] Creating CNI manager for ""
	I1011 21:48:30.976383   47456 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1011 21:48:30.976437   47456 start.go:340] cluster config:
	{Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:48:30.976568   47456 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:48:30.979087   47456 out.go:177] * Starting "multinode-805849" primary control-plane node in "multinode-805849" cluster
	I1011 21:48:30.980221   47456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:48:30.980260   47456 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:48:30.980272   47456 cache.go:56] Caching tarball of preloaded images
	I1011 21:48:30.980363   47456 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:48:30.980375   47456 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:48:30.980515   47456 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/config.json ...
	I1011 21:48:30.980725   47456 start.go:360] acquireMachinesLock for multinode-805849: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:48:30.980780   47456 start.go:364] duration metric: took 35.125µs to acquireMachinesLock for "multinode-805849"
	I1011 21:48:30.980799   47456 start.go:96] Skipping create...Using existing machine configuration
	I1011 21:48:30.980809   47456 fix.go:54] fixHost starting: 
	I1011 21:48:30.981093   47456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:48:30.981132   47456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:48:30.995898   47456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1011 21:48:30.996358   47456 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:48:30.996814   47456 main.go:141] libmachine: Using API Version  1
	I1011 21:48:30.996834   47456 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:48:30.997112   47456 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:48:30.997258   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:30.997394   47456 main.go:141] libmachine: (multinode-805849) Calling .GetState
	I1011 21:48:30.998824   47456 fix.go:112] recreateIfNeeded on multinode-805849: state=Running err=<nil>
	W1011 21:48:30.998841   47456 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 21:48:31.000550   47456 out.go:177] * Updating the running kvm2 "multinode-805849" VM ...
	I1011 21:48:31.001739   47456 machine.go:93] provisionDockerMachine start ...
	I1011 21:48:31.001756   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:31.001927   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.004023   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.004424   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.004446   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.004612   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.004759   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.004902   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.005029   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.005196   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.005377   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.005387   47456 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:48:31.119569   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-805849
	
	I1011 21:48:31.119604   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.119821   47456 buildroot.go:166] provisioning hostname "multinode-805849"
	I1011 21:48:31.119844   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.120047   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.122547   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.122932   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.122969   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.123061   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.123213   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.123373   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.123457   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.123598   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.123745   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.123756   47456 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-805849 && echo "multinode-805849" | sudo tee /etc/hostname
	I1011 21:48:31.251593   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-805849
	
	I1011 21:48:31.251626   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.254344   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.254747   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.254779   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.254942   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.255115   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.255242   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.255373   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.255497   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.255667   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.255689   47456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-805849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-805849/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-805849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:48:31.367503   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:48:31.367537   47456 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:48:31.367557   47456 buildroot.go:174] setting up certificates
	I1011 21:48:31.367565   47456 provision.go:84] configureAuth start
	I1011 21:48:31.367574   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.367868   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:48:31.370590   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.370988   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.371005   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.371178   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.373219   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.373562   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.373600   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.373768   47456 provision.go:143] copyHostCerts
	I1011 21:48:31.373807   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:48:31.373838   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:48:31.373848   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:48:31.373937   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:48:31.374035   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:48:31.374058   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:48:31.374066   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:48:31.374096   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:48:31.374157   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:48:31.374176   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:48:31.374183   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:48:31.374209   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:48:31.374260   47456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.multinode-805849 san=[127.0.0.1 192.168.39.81 localhost minikube multinode-805849]
	I1011 21:48:31.616047   47456 provision.go:177] copyRemoteCerts
	I1011 21:48:31.616106   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:48:31.616131   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.618721   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.619059   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.619081   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.619354   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.619548   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.619705   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.619821   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:48:31.708292   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:48:31.708361   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:48:31.738460   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:48:31.738532   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1011 21:48:31.762620   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:48:31.762695   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:48:31.788411   47456 provision.go:87] duration metric: took 420.835056ms to configureAuth
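The configureAuth step above regenerates the machine's server certificate with the SAN list logged at provision time (127.0.0.1, 192.168.39.81, localhost, minikube, multinode-805849) and copies it to /etc/docker on the guest. A quick way to confirm those SANs actually landed in the regenerated certificate is a standard openssl query against the path logged above; this is only a hedged sketch for anyone replaying the run, and openssl on the CI host is an assumption, not something this log invokes:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'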
	I1011 21:48:31.788436   47456 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:48:31.788648   47456 config.go:182] Loaded profile config "multinode-805849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:48:31.788727   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.791822   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.792319   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.792344   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.792520   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.792667   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.792861   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.793015   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.793169   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.793325   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.793338   47456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:50:02.701475   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:50:02.701513   47456 machine.go:96] duration metric: took 1m31.699762359s to provisionDockerMachine
	I1011 21:50:02.701543   47456 start.go:293] postStartSetup for "multinode-805849" (driver="kvm2")
	I1011 21:50:02.701567   47456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:50:02.701600   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.701974   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:50:02.702012   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.705603   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.706053   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.706085   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.706259   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.706469   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.706607   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.706764   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:02.794924   47456 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:50:02.799421   47456 command_runner.go:130] > NAME=Buildroot
	I1011 21:50:02.799443   47456 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1011 21:50:02.799448   47456 command_runner.go:130] > ID=buildroot
	I1011 21:50:02.799453   47456 command_runner.go:130] > VERSION_ID=2023.02.9
	I1011 21:50:02.799459   47456 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1011 21:50:02.799503   47456 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:50:02.799524   47456 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:50:02.799614   47456 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:50:02.799725   47456 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:50:02.799738   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:50:02.799844   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:50:02.809979   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:50:02.836413   47456 start.go:296] duration metric: took 134.853741ms for postStartSetup
	I1011 21:50:02.836491   47456 fix.go:56] duration metric: took 1m31.855682436s for fixHost
	I1011 21:50:02.836526   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.839362   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.839705   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.839732   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.839913   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.840098   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.840241   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.840415   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.840610   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:50:02.840809   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:50:02.840821   47456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:50:02.951896   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728683402.926758339
	
	I1011 21:50:02.951917   47456 fix.go:216] guest clock: 1728683402.926758339
	I1011 21:50:02.951923   47456 fix.go:229] Guest: 2024-10-11 21:50:02.926758339 +0000 UTC Remote: 2024-10-11 21:50:02.836496536 +0000 UTC m=+91.982626614 (delta=90.261803ms)
	I1011 21:50:02.951980   47456 fix.go:200] guest clock delta is within tolerance: 90.261803ms
	I1011 21:50:02.951988   47456 start.go:83] releasing machines lock for "multinode-805849", held for 1m31.971196448s
	I1011 21:50:02.952025   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.952302   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:50:02.955292   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.955631   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.955654   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.955879   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956432   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956624   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956722   47456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:50:02.956780   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.956841   47456 ssh_runner.go:195] Run: cat /version.json
	I1011 21:50:02.956863   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.959832   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960009   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960207   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.960237   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960405   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.960554   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.960577   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.960579   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960685   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.960757   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.960842   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.960892   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:02.960961   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.961086   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:03.043861   47456 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1011 21:50:03.062991   47456 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1011 21:50:03.063850   47456 ssh_runner.go:195] Run: systemctl --version
	I1011 21:50:03.070388   47456 command_runner.go:130] > systemd 252 (252)
	I1011 21:50:03.070425   47456 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1011 21:50:03.070609   47456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:50:03.234510   47456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 21:50:03.243291   47456 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1011 21:50:03.243678   47456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:50:03.243745   47456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:50:03.254489   47456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1011 21:50:03.254516   47456 start.go:495] detecting cgroup driver to use...
	I1011 21:50:03.254587   47456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:50:03.272170   47456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:50:03.287786   47456 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:50:03.287850   47456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:50:03.302993   47456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:50:03.317766   47456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:50:03.467526   47456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:50:03.618083   47456 docker.go:233] disabling docker service ...
	I1011 21:50:03.618154   47456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:50:03.637059   47456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:50:03.651919   47456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:50:03.814765   47456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:50:03.970836   47456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:50:03.985685   47456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:50:04.006288   47456 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1011 21:50:04.006336   47456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:50:04.006392   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.018003   47456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:50:04.018062   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.029205   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.040152   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.051395   47456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:50:04.062857   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.075054   47456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.087655   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.098878   47456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:50:04.109155   47456 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1011 21:50:04.109234   47456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:50:04.119456   47456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:50:04.258845   47456 ssh_runner.go:195] Run: sudo systemctl restart crio
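The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs, conmon_cgroup is re-added as "pod", and net.ipv4.ip_unprivileged_port_start=0 is re-inserted into default_sysctls. The drop-in file itself is never printed in this log, so the following is only a sketch of how one could spot-check the result on the guest, with the expected matches reconstructed from the logged commands rather than captured output:

	# Hypothetical verification of the sed edits above; the file is not shown in this log.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected to match roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",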
	I1011 21:50:04.482203   47456 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:50:04.482285   47456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:50:04.487700   47456 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1011 21:50:04.487730   47456 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1011 21:50:04.487739   47456 command_runner.go:130] > Device: 0,22	Inode: 1299        Links: 1
	I1011 21:50:04.487749   47456 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1011 21:50:04.487757   47456 command_runner.go:130] > Access: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487766   47456 command_runner.go:130] > Modify: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487772   47456 command_runner.go:130] > Change: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487778   47456 command_runner.go:130] >  Birth: -
	I1011 21:50:04.487799   47456 start.go:563] Will wait 60s for crictl version
	I1011 21:50:04.487870   47456 ssh_runner.go:195] Run: which crictl
	I1011 21:50:04.492207   47456 command_runner.go:130] > /usr/bin/crictl
	I1011 21:50:04.492297   47456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:50:04.529861   47456 command_runner.go:130] > Version:  0.1.0
	I1011 21:50:04.529896   47456 command_runner.go:130] > RuntimeName:  cri-o
	I1011 21:50:04.529905   47456 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1011 21:50:04.529912   47456 command_runner.go:130] > RuntimeApiVersion:  v1
	I1011 21:50:04.529931   47456 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:50:04.529993   47456 ssh_runner.go:195] Run: crio --version
	I1011 21:50:04.559696   47456 command_runner.go:130] > crio version 1.29.1
	I1011 21:50:04.559721   47456 command_runner.go:130] > Version:        1.29.1
	I1011 21:50:04.559731   47456 command_runner.go:130] > GitCommit:      unknown
	I1011 21:50:04.559738   47456 command_runner.go:130] > GitCommitDate:  unknown
	I1011 21:50:04.559746   47456 command_runner.go:130] > GitTreeState:   clean
	I1011 21:50:04.559752   47456 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1011 21:50:04.559756   47456 command_runner.go:130] > GoVersion:      go1.21.6
	I1011 21:50:04.559760   47456 command_runner.go:130] > Compiler:       gc
	I1011 21:50:04.559765   47456 command_runner.go:130] > Platform:       linux/amd64
	I1011 21:50:04.559769   47456 command_runner.go:130] > Linkmode:       dynamic
	I1011 21:50:04.559773   47456 command_runner.go:130] > BuildTags:      
	I1011 21:50:04.559777   47456 command_runner.go:130] >   containers_image_ostree_stub
	I1011 21:50:04.559782   47456 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1011 21:50:04.559789   47456 command_runner.go:130] >   btrfs_noversion
	I1011 21:50:04.559797   47456 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1011 21:50:04.559803   47456 command_runner.go:130] >   libdm_no_deferred_remove
	I1011 21:50:04.559813   47456 command_runner.go:130] >   seccomp
	I1011 21:50:04.559820   47456 command_runner.go:130] > LDFlags:          unknown
	I1011 21:50:04.559827   47456 command_runner.go:130] > SeccompEnabled:   true
	I1011 21:50:04.559836   47456 command_runner.go:130] > AppArmorEnabled:  false
	I1011 21:50:04.561302   47456 ssh_runner.go:195] Run: crio --version
	I1011 21:50:04.590693   47456 command_runner.go:130] > crio version 1.29.1
	I1011 21:50:04.590719   47456 command_runner.go:130] > Version:        1.29.1
	I1011 21:50:04.590730   47456 command_runner.go:130] > GitCommit:      unknown
	I1011 21:50:04.590736   47456 command_runner.go:130] > GitCommitDate:  unknown
	I1011 21:50:04.590742   47456 command_runner.go:130] > GitTreeState:   clean
	I1011 21:50:04.590749   47456 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1011 21:50:04.590754   47456 command_runner.go:130] > GoVersion:      go1.21.6
	I1011 21:50:04.590760   47456 command_runner.go:130] > Compiler:       gc
	I1011 21:50:04.590767   47456 command_runner.go:130] > Platform:       linux/amd64
	I1011 21:50:04.590772   47456 command_runner.go:130] > Linkmode:       dynamic
	I1011 21:50:04.590778   47456 command_runner.go:130] > BuildTags:      
	I1011 21:50:04.590785   47456 command_runner.go:130] >   containers_image_ostree_stub
	I1011 21:50:04.590792   47456 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1011 21:50:04.590802   47456 command_runner.go:130] >   btrfs_noversion
	I1011 21:50:04.590810   47456 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1011 21:50:04.590817   47456 command_runner.go:130] >   libdm_no_deferred_remove
	I1011 21:50:04.590823   47456 command_runner.go:130] >   seccomp
	I1011 21:50:04.590831   47456 command_runner.go:130] > LDFlags:          unknown
	I1011 21:50:04.590838   47456 command_runner.go:130] > SeccompEnabled:   true
	I1011 21:50:04.590845   47456 command_runner.go:130] > AppArmorEnabled:  false
	I1011 21:50:04.594466   47456 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:50:04.595750   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:50:04.598340   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:04.598679   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:04.598715   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:04.598921   47456 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:50:04.603762   47456 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1011 21:50:04.603882   47456 kubeadm.go:883] updating cluster {Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:50:04.604131   47456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:50:04.604210   47456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:50:04.654051   47456 command_runner.go:130] > {
	I1011 21:50:04.654085   47456 command_runner.go:130] >   "images": [
	I1011 21:50:04.654092   47456 command_runner.go:130] >     {
	I1011 21:50:04.654104   47456 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1011 21:50:04.654111   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654121   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1011 21:50:04.654131   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654138   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654157   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1011 21:50:04.654177   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1011 21:50:04.654184   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654192   47456 command_runner.go:130] >       "size": "87190579",
	I1011 21:50:04.654201   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654213   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654226   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654237   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654244   47456 command_runner.go:130] >     },
	I1011 21:50:04.654254   47456 command_runner.go:130] >     {
	I1011 21:50:04.654271   47456 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1011 21:50:04.654283   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654295   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1011 21:50:04.654304   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654314   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654331   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1011 21:50:04.654373   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1011 21:50:04.654389   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654393   47456 command_runner.go:130] >       "size": "94965812",
	I1011 21:50:04.654401   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654409   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654416   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654420   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654426   47456 command_runner.go:130] >     },
	I1011 21:50:04.654430   47456 command_runner.go:130] >     {
	I1011 21:50:04.654436   47456 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1011 21:50:04.654444   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654452   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1011 21:50:04.654456   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654463   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654471   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1011 21:50:04.654481   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1011 21:50:04.654487   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654492   47456 command_runner.go:130] >       "size": "1363676",
	I1011 21:50:04.654499   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654503   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654509   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654514   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654518   47456 command_runner.go:130] >     },
	I1011 21:50:04.654524   47456 command_runner.go:130] >     {
	I1011 21:50:04.654530   47456 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1011 21:50:04.654536   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654542   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1011 21:50:04.654549   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654553   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654563   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1011 21:50:04.654576   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1011 21:50:04.654582   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654587   47456 command_runner.go:130] >       "size": "31470524",
	I1011 21:50:04.654594   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654599   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654605   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654609   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654638   47456 command_runner.go:130] >     },
	I1011 21:50:04.654648   47456 command_runner.go:130] >     {
	I1011 21:50:04.654658   47456 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1011 21:50:04.654668   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654677   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1011 21:50:04.654684   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654690   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654700   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1011 21:50:04.654710   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1011 21:50:04.654716   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654721   47456 command_runner.go:130] >       "size": "63273227",
	I1011 21:50:04.654728   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654733   47456 command_runner.go:130] >       "username": "nonroot",
	I1011 21:50:04.654739   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654743   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654749   47456 command_runner.go:130] >     },
	I1011 21:50:04.654753   47456 command_runner.go:130] >     {
	I1011 21:50:04.654761   47456 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1011 21:50:04.654766   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654772   47456 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1011 21:50:04.654776   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654780   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654790   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1011 21:50:04.654799   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1011 21:50:04.654805   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654809   47456 command_runner.go:130] >       "size": "149009664",
	I1011 21:50:04.654817   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.654824   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.654828   47456 command_runner.go:130] >       },
	I1011 21:50:04.654834   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654838   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654844   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654848   47456 command_runner.go:130] >     },
	I1011 21:50:04.654852   47456 command_runner.go:130] >     {
	I1011 21:50:04.654861   47456 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1011 21:50:04.654868   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654873   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1011 21:50:04.654879   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654883   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654893   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1011 21:50:04.654903   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1011 21:50:04.654910   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654914   47456 command_runner.go:130] >       "size": "95237600",
	I1011 21:50:04.654920   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.654924   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.654935   47456 command_runner.go:130] >       },
	I1011 21:50:04.654942   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654952   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654963   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654972   47456 command_runner.go:130] >     },
	I1011 21:50:04.654982   47456 command_runner.go:130] >     {
	I1011 21:50:04.654996   47456 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1011 21:50:04.655006   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655019   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1011 21:50:04.655030   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655040   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655059   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1011 21:50:04.655070   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1011 21:50:04.655077   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655081   47456 command_runner.go:130] >       "size": "89437508",
	I1011 21:50:04.655087   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655092   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.655098   47456 command_runner.go:130] >       },
	I1011 21:50:04.655102   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655107   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655111   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655114   47456 command_runner.go:130] >     },
	I1011 21:50:04.655117   47456 command_runner.go:130] >     {
	I1011 21:50:04.655123   47456 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1011 21:50:04.655126   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655132   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1011 21:50:04.655137   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655143   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655151   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1011 21:50:04.655161   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1011 21:50:04.655167   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655172   47456 command_runner.go:130] >       "size": "92733849",
	I1011 21:50:04.655178   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.655181   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655185   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655189   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655193   47456 command_runner.go:130] >     },
	I1011 21:50:04.655198   47456 command_runner.go:130] >     {
	I1011 21:50:04.655205   47456 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1011 21:50:04.655211   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655217   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1011 21:50:04.655223   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655227   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655236   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1011 21:50:04.655245   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1011 21:50:04.655252   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655257   47456 command_runner.go:130] >       "size": "68420934",
	I1011 21:50:04.655263   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655267   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.655274   47456 command_runner.go:130] >       },
	I1011 21:50:04.655279   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655285   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655290   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655297   47456 command_runner.go:130] >     },
	I1011 21:50:04.655300   47456 command_runner.go:130] >     {
	I1011 21:50:04.655308   47456 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1011 21:50:04.655315   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655320   47456 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1011 21:50:04.655326   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655331   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655340   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1011 21:50:04.655349   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1011 21:50:04.655353   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655360   47456 command_runner.go:130] >       "size": "742080",
	I1011 21:50:04.655364   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655371   47456 command_runner.go:130] >         "value": "65535"
	I1011 21:50:04.655375   47456 command_runner.go:130] >       },
	I1011 21:50:04.655380   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655390   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655394   47456 command_runner.go:130] >       "pinned": true
	I1011 21:50:04.655400   47456 command_runner.go:130] >     }
	I1011 21:50:04.655406   47456 command_runner.go:130] >   ]
	I1011 21:50:04.655411   47456 command_runner.go:130] > }
	I1011 21:50:04.655578   47456 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:50:04.655591   47456 crio.go:433] Images already preloaded, skipping extraction
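The "sudo crictl images --output json" listing above feeds minikube's preload check: the repo tags CRI-O reports are compared against the images expected for Kubernetes v1.31.1 on crio, and since every expected tag is present the preload tarball extraction is skipped (the same listing is requested again immediately below). To eyeball the same data by hand, the JSON can be flattened to plain tags; this is only a sketch, and jq on the guest is an assumption rather than anything minikube runs here:

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort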
	I1011 21:50:04.655646   47456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:50:04.695803   47456 command_runner.go:130] > {
	I1011 21:50:04.695829   47456 command_runner.go:130] >   "images": [
	I1011 21:50:04.695836   47456 command_runner.go:130] >     {
	I1011 21:50:04.695847   47456 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1011 21:50:04.695853   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.695860   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1011 21:50:04.695866   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695874   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.695886   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1011 21:50:04.695898   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1011 21:50:04.695909   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695916   47456 command_runner.go:130] >       "size": "87190579",
	I1011 21:50:04.695921   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.695928   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.695937   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.695944   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.695950   47456 command_runner.go:130] >     },
	I1011 21:50:04.695955   47456 command_runner.go:130] >     {
	I1011 21:50:04.695965   47456 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1011 21:50:04.695971   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.695979   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1011 21:50:04.695985   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695991   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696003   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1011 21:50:04.696014   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1011 21:50:04.696023   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696030   47456 command_runner.go:130] >       "size": "94965812",
	I1011 21:50:04.696038   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696052   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696061   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696071   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696077   47456 command_runner.go:130] >     },
	I1011 21:50:04.696085   47456 command_runner.go:130] >     {
	I1011 21:50:04.696097   47456 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1011 21:50:04.696104   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696109   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1011 21:50:04.696115   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696119   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696128   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1011 21:50:04.696138   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1011 21:50:04.696145   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696149   47456 command_runner.go:130] >       "size": "1363676",
	I1011 21:50:04.696156   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696162   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696179   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696185   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696189   47456 command_runner.go:130] >     },
	I1011 21:50:04.696195   47456 command_runner.go:130] >     {
	I1011 21:50:04.696201   47456 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1011 21:50:04.696207   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696213   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1011 21:50:04.696219   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696223   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696233   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1011 21:50:04.696245   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1011 21:50:04.696251   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696255   47456 command_runner.go:130] >       "size": "31470524",
	I1011 21:50:04.696261   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696265   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696271   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696275   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696280   47456 command_runner.go:130] >     },
	I1011 21:50:04.696283   47456 command_runner.go:130] >     {
	I1011 21:50:04.696291   47456 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1011 21:50:04.696297   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696302   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1011 21:50:04.696308   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696312   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696321   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1011 21:50:04.696330   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1011 21:50:04.696335   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696339   47456 command_runner.go:130] >       "size": "63273227",
	I1011 21:50:04.696344   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696349   47456 command_runner.go:130] >       "username": "nonroot",
	I1011 21:50:04.696355   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696359   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696365   47456 command_runner.go:130] >     },
	I1011 21:50:04.696370   47456 command_runner.go:130] >     {
	I1011 21:50:04.696379   47456 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1011 21:50:04.696383   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696388   47456 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1011 21:50:04.696393   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696397   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696413   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1011 21:50:04.696423   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1011 21:50:04.696427   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696430   47456 command_runner.go:130] >       "size": "149009664",
	I1011 21:50:04.696434   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696437   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696443   47456 command_runner.go:130] >       },
	I1011 21:50:04.696447   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696451   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696455   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696458   47456 command_runner.go:130] >     },
	I1011 21:50:04.696461   47456 command_runner.go:130] >     {
	I1011 21:50:04.696467   47456 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1011 21:50:04.696470   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696475   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1011 21:50:04.696478   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696482   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696489   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1011 21:50:04.696498   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1011 21:50:04.696503   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696508   47456 command_runner.go:130] >       "size": "95237600",
	I1011 21:50:04.696512   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696516   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696520   47456 command_runner.go:130] >       },
	I1011 21:50:04.696527   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696530   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696537   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696540   47456 command_runner.go:130] >     },
	I1011 21:50:04.696545   47456 command_runner.go:130] >     {
	I1011 21:50:04.696551   47456 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1011 21:50:04.696557   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696562   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1011 21:50:04.696566   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696572   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696584   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1011 21:50:04.696594   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1011 21:50:04.696599   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696604   47456 command_runner.go:130] >       "size": "89437508",
	I1011 21:50:04.696610   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696614   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696619   47456 command_runner.go:130] >       },
	I1011 21:50:04.696623   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696629   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696633   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696638   47456 command_runner.go:130] >     },
	I1011 21:50:04.696641   47456 command_runner.go:130] >     {
	I1011 21:50:04.696647   47456 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1011 21:50:04.696653   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696658   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1011 21:50:04.696664   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696668   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696677   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1011 21:50:04.696688   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1011 21:50:04.696694   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696698   47456 command_runner.go:130] >       "size": "92733849",
	I1011 21:50:04.696704   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696709   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696715   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696719   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696724   47456 command_runner.go:130] >     },
	I1011 21:50:04.696728   47456 command_runner.go:130] >     {
	I1011 21:50:04.696733   47456 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1011 21:50:04.696739   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696745   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1011 21:50:04.696750   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696754   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696763   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1011 21:50:04.696773   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1011 21:50:04.696780   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696784   47456 command_runner.go:130] >       "size": "68420934",
	I1011 21:50:04.696790   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696795   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696800   47456 command_runner.go:130] >       },
	I1011 21:50:04.696804   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696810   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696814   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696817   47456 command_runner.go:130] >     },
	I1011 21:50:04.696823   47456 command_runner.go:130] >     {
	I1011 21:50:04.696829   47456 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1011 21:50:04.696835   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696840   47456 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1011 21:50:04.696845   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696849   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696858   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1011 21:50:04.696865   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1011 21:50:04.696870   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696875   47456 command_runner.go:130] >       "size": "742080",
	I1011 21:50:04.696881   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696885   47456 command_runner.go:130] >         "value": "65535"
	I1011 21:50:04.696891   47456 command_runner.go:130] >       },
	I1011 21:50:04.696894   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696900   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696904   47456 command_runner.go:130] >       "pinned": true
	I1011 21:50:04.696909   47456 command_runner.go:130] >     }
	I1011 21:50:04.696914   47456 command_runner.go:130] >   ]
	I1011 21:50:04.696919   47456 command_runner.go:130] > }
	I1011 21:50:04.697060   47456 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:50:04.697074   47456 cache_images.go:84] Images are preloaded, skipping loading
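	The JSON blocks above are the raw output of `sudo crictl images --output json`, which minikube inspects here to conclude that the preloaded tarball already contains every required image. As a rough illustration only (not minikube's own code), a decoder for that listing could look like the sketch below; the struct fields simply mirror the keys visible in the log, and the `images.json` filename is a placeholder for wherever the captured output is stored.

	```go
	// Minimal sketch: decode a crictl image listing like the one logged above.
	// The field names mirror the JSON keys shown in the log; this is illustrative,
	// not the code path minikube itself uses.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// "images.json" is a placeholder path for the captured crictl output.
		data, err := os.ReadFile("images.json")
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
		}
	}
	```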
	I1011 21:50:04.697081   47456 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.31.1 crio true true} ...
	I1011 21:50:04.697175   47456 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-805849 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:50:04.697241   47456 ssh_runner.go:195] Run: crio config
	I1011 21:50:04.740738   47456 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1011 21:50:04.740767   47456 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1011 21:50:04.740774   47456 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1011 21:50:04.740777   47456 command_runner.go:130] > #
	I1011 21:50:04.740784   47456 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1011 21:50:04.740790   47456 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1011 21:50:04.740796   47456 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1011 21:50:04.740802   47456 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1011 21:50:04.740805   47456 command_runner.go:130] > # reload'.
	I1011 21:50:04.740817   47456 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1011 21:50:04.740826   47456 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1011 21:50:04.740840   47456 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1011 21:50:04.740849   47456 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1011 21:50:04.740857   47456 command_runner.go:130] > [crio]
	I1011 21:50:04.740865   47456 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1011 21:50:04.740873   47456 command_runner.go:130] > # containers images, in this directory.
	I1011 21:50:04.741234   47456 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1011 21:50:04.741270   47456 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1011 21:50:04.741279   47456 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1011 21:50:04.741294   47456 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1011 21:50:04.741301   47456 command_runner.go:130] > # imagestore = ""
	I1011 21:50:04.741312   47456 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1011 21:50:04.741323   47456 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1011 21:50:04.741334   47456 command_runner.go:130] > storage_driver = "overlay"
	I1011 21:50:04.741343   47456 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1011 21:50:04.741355   47456 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1011 21:50:04.741363   47456 command_runner.go:130] > storage_option = [
	I1011 21:50:04.741392   47456 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1011 21:50:04.741410   47456 command_runner.go:130] > ]
	I1011 21:50:04.741418   47456 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1011 21:50:04.741443   47456 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1011 21:50:04.741455   47456 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1011 21:50:04.741467   47456 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1011 21:50:04.741476   47456 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1011 21:50:04.741482   47456 command_runner.go:130] > # always happen on a node reboot
	I1011 21:50:04.741492   47456 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1011 21:50:04.741505   47456 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1011 21:50:04.741517   47456 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1011 21:50:04.741525   47456 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1011 21:50:04.741538   47456 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1011 21:50:04.741553   47456 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1011 21:50:04.741567   47456 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1011 21:50:04.741575   47456 command_runner.go:130] > # internal_wipe = true
	I1011 21:50:04.741584   47456 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1011 21:50:04.741590   47456 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1011 21:50:04.741595   47456 command_runner.go:130] > # internal_repair = false
	I1011 21:50:04.741606   47456 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1011 21:50:04.741618   47456 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1011 21:50:04.741631   47456 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1011 21:50:04.741642   47456 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1011 21:50:04.741651   47456 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1011 21:50:04.741656   47456 command_runner.go:130] > [crio.api]
	I1011 21:50:04.741664   47456 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1011 21:50:04.741671   47456 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1011 21:50:04.741679   47456 command_runner.go:130] > # IP address on which the stream server will listen.
	I1011 21:50:04.741686   47456 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1011 21:50:04.741692   47456 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1011 21:50:04.741697   47456 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1011 21:50:04.741701   47456 command_runner.go:130] > # stream_port = "0"
	I1011 21:50:04.741706   47456 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1011 21:50:04.741713   47456 command_runner.go:130] > # stream_enable_tls = false
	I1011 21:50:04.741719   47456 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1011 21:50:04.741725   47456 command_runner.go:130] > # stream_idle_timeout = ""
	I1011 21:50:04.741735   47456 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1011 21:50:04.741747   47456 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1011 21:50:04.741757   47456 command_runner.go:130] > # minutes.
	I1011 21:50:04.741763   47456 command_runner.go:130] > # stream_tls_cert = ""
	I1011 21:50:04.741779   47456 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1011 21:50:04.741792   47456 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1011 21:50:04.741802   47456 command_runner.go:130] > # stream_tls_key = ""
	I1011 21:50:04.741811   47456 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1011 21:50:04.741824   47456 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1011 21:50:04.741839   47456 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1011 21:50:04.741848   47456 command_runner.go:130] > # stream_tls_ca = ""
	I1011 21:50:04.741860   47456 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1011 21:50:04.741871   47456 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1011 21:50:04.741886   47456 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1011 21:50:04.741897   47456 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1011 21:50:04.741910   47456 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1011 21:50:04.741919   47456 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1011 21:50:04.741923   47456 command_runner.go:130] > [crio.runtime]
	I1011 21:50:04.741929   47456 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1011 21:50:04.741940   47456 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1011 21:50:04.741950   47456 command_runner.go:130] > # "nofile=1024:2048"
	I1011 21:50:04.741960   47456 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1011 21:50:04.741969   47456 command_runner.go:130] > # default_ulimits = [
	I1011 21:50:04.741975   47456 command_runner.go:130] > # ]
	I1011 21:50:04.741987   47456 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1011 21:50:04.741996   47456 command_runner.go:130] > # no_pivot = false
	I1011 21:50:04.742005   47456 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1011 21:50:04.742013   47456 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1011 21:50:04.742017   47456 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1011 21:50:04.742025   47456 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1011 21:50:04.742029   47456 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1011 21:50:04.742037   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1011 21:50:04.742041   47456 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1011 21:50:04.742045   47456 command_runner.go:130] > # Cgroup setting for conmon
	I1011 21:50:04.742051   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1011 21:50:04.742059   47456 command_runner.go:130] > conmon_cgroup = "pod"
	I1011 21:50:04.742068   47456 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1011 21:50:04.742079   47456 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1011 21:50:04.742089   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1011 21:50:04.742098   47456 command_runner.go:130] > conmon_env = [
	I1011 21:50:04.742108   47456 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1011 21:50:04.742121   47456 command_runner.go:130] > ]
	I1011 21:50:04.742131   47456 command_runner.go:130] > # Additional environment variables to set for all the
	I1011 21:50:04.742142   47456 command_runner.go:130] > # containers. These are overridden if set in the
	I1011 21:50:04.742155   47456 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1011 21:50:04.742162   47456 command_runner.go:130] > # default_env = [
	I1011 21:50:04.742170   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742178   47456 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1011 21:50:04.742192   47456 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1011 21:50:04.742214   47456 command_runner.go:130] > # selinux = false
	I1011 21:50:04.742224   47456 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1011 21:50:04.742234   47456 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1011 21:50:04.742246   47456 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1011 21:50:04.742254   47456 command_runner.go:130] > # seccomp_profile = ""
	I1011 21:50:04.742264   47456 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1011 21:50:04.742275   47456 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1011 21:50:04.742288   47456 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1011 21:50:04.742296   47456 command_runner.go:130] > # which might increase security.
	I1011 21:50:04.742303   47456 command_runner.go:130] > # This option is currently deprecated,
	I1011 21:50:04.742316   47456 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1011 21:50:04.742326   47456 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1011 21:50:04.742335   47456 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1011 21:50:04.742349   47456 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1011 21:50:04.742364   47456 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1011 21:50:04.742374   47456 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1011 21:50:04.742382   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.742390   47456 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1011 21:50:04.742398   47456 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1011 21:50:04.742407   47456 command_runner.go:130] > # the cgroup blockio controller.
	I1011 21:50:04.742412   47456 command_runner.go:130] > # blockio_config_file = ""
	I1011 21:50:04.742423   47456 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1011 21:50:04.742433   47456 command_runner.go:130] > # blockio parameters.
	I1011 21:50:04.742442   47456 command_runner.go:130] > # blockio_reload = false
	I1011 21:50:04.742455   47456 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1011 21:50:04.742465   47456 command_runner.go:130] > # irqbalance daemon.
	I1011 21:50:04.742474   47456 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1011 21:50:04.742485   47456 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1011 21:50:04.742498   47456 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1011 21:50:04.742507   47456 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1011 21:50:04.742523   47456 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1011 21:50:04.742537   47456 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1011 21:50:04.742546   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.742557   47456 command_runner.go:130] > # rdt_config_file = ""
	I1011 21:50:04.742569   47456 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1011 21:50:04.742580   47456 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1011 21:50:04.742600   47456 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1011 21:50:04.742609   47456 command_runner.go:130] > # separate_pull_cgroup = ""
	I1011 21:50:04.742630   47456 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1011 21:50:04.742643   47456 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1011 21:50:04.742653   47456 command_runner.go:130] > # will be added.
	I1011 21:50:04.742660   47456 command_runner.go:130] > # default_capabilities = [
	I1011 21:50:04.742669   47456 command_runner.go:130] > # 	"CHOWN",
	I1011 21:50:04.742675   47456 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1011 21:50:04.742681   47456 command_runner.go:130] > # 	"FSETID",
	I1011 21:50:04.742690   47456 command_runner.go:130] > # 	"FOWNER",
	I1011 21:50:04.742696   47456 command_runner.go:130] > # 	"SETGID",
	I1011 21:50:04.742706   47456 command_runner.go:130] > # 	"SETUID",
	I1011 21:50:04.742712   47456 command_runner.go:130] > # 	"SETPCAP",
	I1011 21:50:04.742718   47456 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1011 21:50:04.742725   47456 command_runner.go:130] > # 	"KILL",
	I1011 21:50:04.742731   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742744   47456 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1011 21:50:04.742757   47456 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1011 21:50:04.742766   47456 command_runner.go:130] > # add_inheritable_capabilities = false
	I1011 21:50:04.742778   47456 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1011 21:50:04.742790   47456 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1011 21:50:04.742799   47456 command_runner.go:130] > default_sysctls = [
	I1011 21:50:04.742807   47456 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1011 21:50:04.742814   47456 command_runner.go:130] > ]
	I1011 21:50:04.742822   47456 command_runner.go:130] > # List of devices on the host that a
	I1011 21:50:04.742834   47456 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1011 21:50:04.742843   47456 command_runner.go:130] > # allowed_devices = [
	I1011 21:50:04.742851   47456 command_runner.go:130] > # 	"/dev/fuse",
	I1011 21:50:04.742856   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742863   47456 command_runner.go:130] > # List of additional devices. specified as
	I1011 21:50:04.742878   47456 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1011 21:50:04.742888   47456 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1011 21:50:04.742898   47456 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1011 21:50:04.742907   47456 command_runner.go:130] > # additional_devices = [
	I1011 21:50:04.742912   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742920   47456 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1011 21:50:04.742935   47456 command_runner.go:130] > # cdi_spec_dirs = [
	I1011 21:50:04.742944   47456 command_runner.go:130] > # 	"/etc/cdi",
	I1011 21:50:04.742950   47456 command_runner.go:130] > # 	"/var/run/cdi",
	I1011 21:50:04.742956   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742966   47456 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1011 21:50:04.742979   47456 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1011 21:50:04.742989   47456 command_runner.go:130] > # Defaults to false.
	I1011 21:50:04.742997   47456 command_runner.go:130] > # device_ownership_from_security_context = false
	I1011 21:50:04.743010   47456 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1011 21:50:04.743022   47456 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1011 21:50:04.743031   47456 command_runner.go:130] > # hooks_dir = [
	I1011 21:50:04.743038   47456 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1011 21:50:04.743047   47456 command_runner.go:130] > # ]
	I1011 21:50:04.743059   47456 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1011 21:50:04.743073   47456 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1011 21:50:04.743084   47456 command_runner.go:130] > # its default mounts from the following two files:
	I1011 21:50:04.743089   47456 command_runner.go:130] > #
	I1011 21:50:04.743101   47456 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1011 21:50:04.743115   47456 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1011 21:50:04.743127   47456 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1011 21:50:04.743135   47456 command_runner.go:130] > #
	I1011 21:50:04.743143   47456 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1011 21:50:04.743157   47456 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1011 21:50:04.743170   47456 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1011 21:50:04.743181   47456 command_runner.go:130] > #      only add mounts it finds in this file.
	I1011 21:50:04.743186   47456 command_runner.go:130] > #
	I1011 21:50:04.743198   47456 command_runner.go:130] > # default_mounts_file = ""
	I1011 21:50:04.743209   47456 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1011 21:50:04.743220   47456 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1011 21:50:04.743224   47456 command_runner.go:130] > pids_limit = 1024
	I1011 21:50:04.743230   47456 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1011 21:50:04.743239   47456 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1011 21:50:04.743245   47456 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1011 21:50:04.743253   47456 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1011 21:50:04.743259   47456 command_runner.go:130] > # log_size_max = -1
	I1011 21:50:04.743265   47456 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1011 21:50:04.743272   47456 command_runner.go:130] > # log_to_journald = false
	I1011 21:50:04.743278   47456 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1011 21:50:04.743285   47456 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1011 21:50:04.743308   47456 command_runner.go:130] > # Path to directory for container attach sockets.
	I1011 21:50:04.743321   47456 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1011 21:50:04.743329   47456 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1011 21:50:04.743339   47456 command_runner.go:130] > # bind_mount_prefix = ""
	I1011 21:50:04.743351   47456 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1011 21:50:04.743359   47456 command_runner.go:130] > # read_only = false
	I1011 21:50:04.743369   47456 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1011 21:50:04.743383   47456 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1011 21:50:04.743394   47456 command_runner.go:130] > # live configuration reload.
	I1011 21:50:04.743403   47456 command_runner.go:130] > # log_level = "info"
	I1011 21:50:04.743412   47456 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1011 21:50:04.743423   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.743430   47456 command_runner.go:130] > # log_filter = ""
	I1011 21:50:04.743440   47456 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1011 21:50:04.743452   47456 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1011 21:50:04.743461   47456 command_runner.go:130] > # separated by comma.
	I1011 21:50:04.743473   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743482   47456 command_runner.go:130] > # uid_mappings = ""
	I1011 21:50:04.743493   47456 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1011 21:50:04.743506   47456 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1011 21:50:04.743514   47456 command_runner.go:130] > # separated by comma.
	I1011 21:50:04.743528   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743538   47456 command_runner.go:130] > # gid_mappings = ""
	I1011 21:50:04.743547   47456 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1011 21:50:04.743558   47456 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1011 21:50:04.743564   47456 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1011 21:50:04.743572   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743576   47456 command_runner.go:130] > # minimum_mappable_uid = -1
	I1011 21:50:04.743582   47456 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1011 21:50:04.743590   47456 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1011 21:50:04.743596   47456 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1011 21:50:04.743610   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743619   47456 command_runner.go:130] > # minimum_mappable_gid = -1
	I1011 21:50:04.743628   47456 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1011 21:50:04.743641   47456 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1011 21:50:04.743650   47456 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1011 21:50:04.743663   47456 command_runner.go:130] > # ctr_stop_timeout = 30
	I1011 21:50:04.743676   47456 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1011 21:50:04.743687   47456 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1011 21:50:04.743697   47456 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1011 21:50:04.743707   47456 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1011 21:50:04.743716   47456 command_runner.go:130] > drop_infra_ctr = false
	I1011 21:50:04.743726   47456 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1011 21:50:04.743737   47456 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1011 21:50:04.743749   47456 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1011 21:50:04.743758   47456 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1011 21:50:04.743768   47456 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1011 21:50:04.743776   47456 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1011 21:50:04.743784   47456 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1011 21:50:04.743795   47456 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1011 21:50:04.743804   47456 command_runner.go:130] > # shared_cpuset = ""
	I1011 21:50:04.743816   47456 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1011 21:50:04.743824   47456 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1011 21:50:04.743835   47456 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1011 21:50:04.743846   47456 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1011 21:50:04.743856   47456 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1011 21:50:04.743864   47456 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1011 21:50:04.743877   47456 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1011 21:50:04.743886   47456 command_runner.go:130] > # enable_criu_support = false
	I1011 21:50:04.743894   47456 command_runner.go:130] > # Enable/disable the generation of the container,
	I1011 21:50:04.743906   47456 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1011 21:50:04.743916   47456 command_runner.go:130] > # enable_pod_events = false
	I1011 21:50:04.743927   47456 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1011 21:50:04.743940   47456 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1011 21:50:04.743951   47456 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1011 21:50:04.743960   47456 command_runner.go:130] > # default_runtime = "runc"
	I1011 21:50:04.743969   47456 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1011 21:50:04.743985   47456 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1011 21:50:04.743999   47456 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1011 21:50:04.744009   47456 command_runner.go:130] > # creation as a file is not desired either.
	I1011 21:50:04.744024   47456 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1011 21:50:04.744039   47456 command_runner.go:130] > # the hostname is being managed dynamically.
	I1011 21:50:04.744049   47456 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1011 21:50:04.744056   47456 command_runner.go:130] > # ]
	I1011 21:50:04.744069   47456 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1011 21:50:04.744081   47456 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1011 21:50:04.744088   47456 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1011 21:50:04.744099   47456 command_runner.go:130] > # Each entry in the table should follow the format:
	I1011 21:50:04.744107   47456 command_runner.go:130] > #
	I1011 21:50:04.744114   47456 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1011 21:50:04.744125   47456 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1011 21:50:04.744151   47456 command_runner.go:130] > # runtime_type = "oci"
	I1011 21:50:04.744164   47456 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1011 21:50:04.744179   47456 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1011 21:50:04.744189   47456 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1011 21:50:04.744201   47456 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1011 21:50:04.744210   47456 command_runner.go:130] > # monitor_env = []
	I1011 21:50:04.744217   47456 command_runner.go:130] > # privileged_without_host_devices = false
	I1011 21:50:04.744223   47456 command_runner.go:130] > # allowed_annotations = []
	I1011 21:50:04.744230   47456 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1011 21:50:04.744233   47456 command_runner.go:130] > # Where:
	I1011 21:50:04.744239   47456 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1011 21:50:04.744246   47456 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1011 21:50:04.744252   47456 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1011 21:50:04.744261   47456 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1011 21:50:04.744265   47456 command_runner.go:130] > #   in $PATH.
	I1011 21:50:04.744272   47456 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1011 21:50:04.744277   47456 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1011 21:50:04.744283   47456 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1011 21:50:04.744288   47456 command_runner.go:130] > #   state.
	I1011 21:50:04.744294   47456 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1011 21:50:04.744301   47456 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1011 21:50:04.744307   47456 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1011 21:50:04.744315   47456 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1011 21:50:04.744320   47456 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1011 21:50:04.744329   47456 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1011 21:50:04.744337   47456 command_runner.go:130] > #   The currently recognized values are:
	I1011 21:50:04.744342   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1011 21:50:04.744351   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1011 21:50:04.744361   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1011 21:50:04.744369   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1011 21:50:04.744376   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1011 21:50:04.744399   47456 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1011 21:50:04.744411   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1011 21:50:04.744419   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1011 21:50:04.744425   47456 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1011 21:50:04.744433   47456 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1011 21:50:04.744437   47456 command_runner.go:130] > #   deprecated option "conmon".
	I1011 21:50:04.744444   47456 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1011 21:50:04.744452   47456 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1011 21:50:04.744459   47456 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1011 21:50:04.744473   47456 command_runner.go:130] > #   should be moved to the container's cgroup
	I1011 21:50:04.744483   47456 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1011 21:50:04.744491   47456 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1011 21:50:04.744497   47456 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1011 21:50:04.744504   47456 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1011 21:50:04.744507   47456 command_runner.go:130] > #
	I1011 21:50:04.744512   47456 command_runner.go:130] > # Using the seccomp notifier feature:
	I1011 21:50:04.744517   47456 command_runner.go:130] > #
	I1011 21:50:04.744523   47456 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1011 21:50:04.744531   47456 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1011 21:50:04.744536   47456 command_runner.go:130] > #
	I1011 21:50:04.744542   47456 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1011 21:50:04.744551   47456 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1011 21:50:04.744556   47456 command_runner.go:130] > #
	I1011 21:50:04.744564   47456 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1011 21:50:04.744569   47456 command_runner.go:130] > # feature.
	I1011 21:50:04.744573   47456 command_runner.go:130] > #
	I1011 21:50:04.744580   47456 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1011 21:50:04.744590   47456 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1011 21:50:04.744596   47456 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1011 21:50:04.744604   47456 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1011 21:50:04.744611   47456 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1011 21:50:04.744615   47456 command_runner.go:130] > #
	I1011 21:50:04.744621   47456 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1011 21:50:04.744631   47456 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1011 21:50:04.744637   47456 command_runner.go:130] > #
	I1011 21:50:04.744643   47456 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1011 21:50:04.744650   47456 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1011 21:50:04.744653   47456 command_runner.go:130] > #
	I1011 21:50:04.744658   47456 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1011 21:50:04.744666   47456 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1011 21:50:04.744672   47456 command_runner.go:130] > # limitation.
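	For orientation, the seccomp notifier described in the comments above is driven from the pod side by a single annotation. The sketch below is a minimal, hypothetical pod that would use it, assuming the chosen runtime handler already lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations and that a seccomp profile is applied; the pod and container names are illustrative only.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                # hypothetical name
	  annotations:
	    # Ask CRI-O to terminate the workload ~5s after a blocked syscall is observed.
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                       # required, otherwise the kubelet restarts the container
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault                 # the notifier only applies when a seccomp profile is in use
	
	As noted above, syscalls blocked by the profile's defaultAction do not trigger notifications, so a profile relying solely on defaultAction produces no events.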
	I1011 21:50:04.744677   47456 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1011 21:50:04.744681   47456 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1011 21:50:04.744688   47456 command_runner.go:130] > runtime_type = "oci"
	I1011 21:50:04.744692   47456 command_runner.go:130] > runtime_root = "/run/runc"
	I1011 21:50:04.744698   47456 command_runner.go:130] > runtime_config_path = ""
	I1011 21:50:04.744703   47456 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1011 21:50:04.744709   47456 command_runner.go:130] > monitor_cgroup = "pod"
	I1011 21:50:04.744713   47456 command_runner.go:130] > monitor_exec_cgroup = ""
	I1011 21:50:04.744719   47456 command_runner.go:130] > monitor_env = [
	I1011 21:50:04.744725   47456 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1011 21:50:04.744730   47456 command_runner.go:130] > ]
	I1011 21:50:04.744734   47456 command_runner.go:130] > privileged_without_host_devices = false
	I1011 21:50:04.744743   47456 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1011 21:50:04.744750   47456 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1011 21:50:04.744756   47456 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1011 21:50:04.744765   47456 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1011 21:50:04.744772   47456 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1011 21:50:04.744781   47456 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1011 21:50:04.744791   47456 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1011 21:50:04.744802   47456 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1011 21:50:04.744809   47456 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1011 21:50:04.744817   47456 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1011 21:50:04.744823   47456 command_runner.go:130] > # Example:
	I1011 21:50:04.744828   47456 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1011 21:50:04.744832   47456 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1011 21:50:04.744836   47456 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1011 21:50:04.744841   47456 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1011 21:50:04.744845   47456 command_runner.go:130] > # cpuset = "0-1"
	I1011 21:50:04.744848   47456 command_runner.go:130] > # cpushares = 0
	I1011 21:50:04.744852   47456 command_runner.go:130] > # Where:
	I1011 21:50:04.744859   47456 command_runner.go:130] > # The workload name is workload-type.
	I1011 21:50:04.744865   47456 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1011 21:50:04.744870   47456 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1011 21:50:04.744875   47456 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1011 21:50:04.744882   47456 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1011 21:50:04.744893   47456 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
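	The pod side of the workloads mechanism is not shown in the config dump itself, so here is a hedged sketch of a pod opting into the "workload-type" example above and overriding the cpu shares of one container, following the annotation forms given in the comments; the pod name, container name, and share value are hypothetical.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                        # hypothetical name
	  annotations:
	    # Activation annotation: matched on the key only, the value is ignored.
	    io.crio/workload: ""
	    # Per-container override, mirroring the example annotation shown above.
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10
	
	Because the workloads table is marked EXPERIMENTAL above, the annotation keys and override format may differ between CRI-O versions.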
	I1011 21:50:04.744898   47456 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1011 21:50:04.744904   47456 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1011 21:50:04.744908   47456 command_runner.go:130] > # Default value is set to true
	I1011 21:50:04.744912   47456 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1011 21:50:04.744918   47456 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1011 21:50:04.744922   47456 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1011 21:50:04.744926   47456 command_runner.go:130] > # Default value is set to 'false'
	I1011 21:50:04.744930   47456 command_runner.go:130] > # disable_hostport_mapping = false
	I1011 21:50:04.744936   47456 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1011 21:50:04.744941   47456 command_runner.go:130] > #
	I1011 21:50:04.744947   47456 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1011 21:50:04.744954   47456 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1011 21:50:04.744960   47456 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1011 21:50:04.744968   47456 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1011 21:50:04.744973   47456 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1011 21:50:04.744979   47456 command_runner.go:130] > [crio.image]
	I1011 21:50:04.744986   47456 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1011 21:50:04.744992   47456 command_runner.go:130] > # default_transport = "docker://"
	I1011 21:50:04.744998   47456 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1011 21:50:04.745004   47456 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1011 21:50:04.745009   47456 command_runner.go:130] > # global_auth_file = ""
	I1011 21:50:04.745015   47456 command_runner.go:130] > # The image used to instantiate infra containers.
	I1011 21:50:04.745020   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.745029   47456 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1011 21:50:04.745035   47456 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1011 21:50:04.745042   47456 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1011 21:50:04.745047   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.745054   47456 command_runner.go:130] > # pause_image_auth_file = ""
	I1011 21:50:04.745059   47456 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1011 21:50:04.745067   47456 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1011 21:50:04.745075   47456 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1011 21:50:04.745083   47456 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1011 21:50:04.745087   47456 command_runner.go:130] > # pause_command = "/pause"
	I1011 21:50:04.745095   47456 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1011 21:50:04.745101   47456 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1011 21:50:04.745108   47456 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1011 21:50:04.745116   47456 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1011 21:50:04.745121   47456 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1011 21:50:04.745129   47456 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1011 21:50:04.745134   47456 command_runner.go:130] > # pinned_images = [
	I1011 21:50:04.745137   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745145   47456 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1011 21:50:04.745151   47456 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1011 21:50:04.745157   47456 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1011 21:50:04.745164   47456 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1011 21:50:04.745169   47456 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1011 21:50:04.745175   47456 command_runner.go:130] > # signature_policy = ""
	I1011 21:50:04.745181   47456 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1011 21:50:04.745190   47456 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1011 21:50:04.745200   47456 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1011 21:50:04.745209   47456 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1011 21:50:04.745216   47456 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1011 21:50:04.745220   47456 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1011 21:50:04.745228   47456 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1011 21:50:04.745234   47456 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1011 21:50:04.745238   47456 command_runner.go:130] > # changing them here.
	I1011 21:50:04.745244   47456 command_runner.go:130] > # insecure_registries = [
	I1011 21:50:04.745247   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745253   47456 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1011 21:50:04.745263   47456 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1011 21:50:04.745267   47456 command_runner.go:130] > # image_volumes = "mkdir"
	I1011 21:50:04.745272   47456 command_runner.go:130] > # Temporary directory to use for storing big files
	I1011 21:50:04.745278   47456 command_runner.go:130] > # big_files_temporary_dir = ""
	I1011 21:50:04.745284   47456 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1011 21:50:04.745290   47456 command_runner.go:130] > # CNI plugins.
	I1011 21:50:04.745293   47456 command_runner.go:130] > [crio.network]
	I1011 21:50:04.745301   47456 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1011 21:50:04.745309   47456 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1011 21:50:04.745316   47456 command_runner.go:130] > # cni_default_network = ""
	I1011 21:50:04.745321   47456 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1011 21:50:04.745327   47456 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1011 21:50:04.745332   47456 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1011 21:50:04.745338   47456 command_runner.go:130] > # plugin_dirs = [
	I1011 21:50:04.745342   47456 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1011 21:50:04.745347   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745353   47456 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1011 21:50:04.745359   47456 command_runner.go:130] > [crio.metrics]
	I1011 21:50:04.745364   47456 command_runner.go:130] > # Globally enable or disable metrics support.
	I1011 21:50:04.745370   47456 command_runner.go:130] > enable_metrics = true
	I1011 21:50:04.745374   47456 command_runner.go:130] > # Specify enabled metrics collectors.
	I1011 21:50:04.745381   47456 command_runner.go:130] > # Per default all metrics are enabled.
	I1011 21:50:04.745387   47456 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1011 21:50:04.745396   47456 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1011 21:50:04.745403   47456 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1011 21:50:04.745406   47456 command_runner.go:130] > # metrics_collectors = [
	I1011 21:50:04.745410   47456 command_runner.go:130] > # 	"operations",
	I1011 21:50:04.745417   47456 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1011 21:50:04.745421   47456 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1011 21:50:04.745428   47456 command_runner.go:130] > # 	"operations_errors",
	I1011 21:50:04.745432   47456 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1011 21:50:04.745438   47456 command_runner.go:130] > # 	"image_pulls_by_name",
	I1011 21:50:04.745442   47456 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1011 21:50:04.745448   47456 command_runner.go:130] > # 	"image_pulls_failures",
	I1011 21:50:04.745455   47456 command_runner.go:130] > # 	"image_pulls_successes",
	I1011 21:50:04.745459   47456 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1011 21:50:04.745465   47456 command_runner.go:130] > # 	"image_layer_reuse",
	I1011 21:50:04.745470   47456 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1011 21:50:04.745476   47456 command_runner.go:130] > # 	"containers_oom_total",
	I1011 21:50:04.745480   47456 command_runner.go:130] > # 	"containers_oom",
	I1011 21:50:04.745493   47456 command_runner.go:130] > # 	"processes_defunct",
	I1011 21:50:04.745501   47456 command_runner.go:130] > # 	"operations_total",
	I1011 21:50:04.745505   47456 command_runner.go:130] > # 	"operations_latency_seconds",
	I1011 21:50:04.745512   47456 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1011 21:50:04.745516   47456 command_runner.go:130] > # 	"operations_errors_total",
	I1011 21:50:04.745523   47456 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1011 21:50:04.745527   47456 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1011 21:50:04.745533   47456 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1011 21:50:04.745537   47456 command_runner.go:130] > # 	"image_pulls_success_total",
	I1011 21:50:04.745544   47456 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1011 21:50:04.745548   47456 command_runner.go:130] > # 	"containers_oom_count_total",
	I1011 21:50:04.745555   47456 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1011 21:50:04.745559   47456 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1011 21:50:04.745567   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745575   47456 command_runner.go:130] > # The port on which the metrics server will listen.
	I1011 21:50:04.745579   47456 command_runner.go:130] > # metrics_port = 9090
	I1011 21:50:04.745587   47456 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1011 21:50:04.745591   47456 command_runner.go:130] > # metrics_socket = ""
	I1011 21:50:04.745597   47456 command_runner.go:130] > # The certificate for the secure metrics server.
	I1011 21:50:04.745604   47456 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1011 21:50:04.745612   47456 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1011 21:50:04.745617   47456 command_runner.go:130] > # certificate on any modification event.
	I1011 21:50:04.745622   47456 command_runner.go:130] > # metrics_cert = ""
	I1011 21:50:04.745628   47456 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1011 21:50:04.745635   47456 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1011 21:50:04.745639   47456 command_runner.go:130] > # metrics_key = ""
	I1011 21:50:04.745644   47456 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1011 21:50:04.745650   47456 command_runner.go:130] > [crio.tracing]
	I1011 21:50:04.745655   47456 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1011 21:50:04.745661   47456 command_runner.go:130] > # enable_tracing = false
	I1011 21:50:04.745666   47456 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1011 21:50:04.745673   47456 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1011 21:50:04.745679   47456 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1011 21:50:04.745686   47456 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1011 21:50:04.745692   47456 command_runner.go:130] > # CRI-O NRI configuration.
	I1011 21:50:04.745699   47456 command_runner.go:130] > [crio.nri]
	I1011 21:50:04.745705   47456 command_runner.go:130] > # Globally enable or disable NRI.
	I1011 21:50:04.745713   47456 command_runner.go:130] > # enable_nri = false
	I1011 21:50:04.745723   47456 command_runner.go:130] > # NRI socket to listen on.
	I1011 21:50:04.745733   47456 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1011 21:50:04.745743   47456 command_runner.go:130] > # NRI plugin directory to use.
	I1011 21:50:04.745750   47456 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1011 21:50:04.745755   47456 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1011 21:50:04.745761   47456 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1011 21:50:04.745766   47456 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1011 21:50:04.745773   47456 command_runner.go:130] > # nri_disable_connections = false
	I1011 21:50:04.745781   47456 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1011 21:50:04.745789   47456 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1011 21:50:04.745794   47456 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1011 21:50:04.745806   47456 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1011 21:50:04.745816   47456 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1011 21:50:04.745825   47456 command_runner.go:130] > [crio.stats]
	I1011 21:50:04.745836   47456 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1011 21:50:04.745847   47456 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1011 21:50:04.745857   47456 command_runner.go:130] > # stats_collection_period = 0
	I1011 21:50:04.745884   47456 command_runner.go:130] ! time="2024-10-11 21:50:04.707810385Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1011 21:50:04.745910   47456 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1011 21:50:04.745970   47456 cni.go:84] Creating CNI manager for ""
	I1011 21:50:04.745983   47456 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1011 21:50:04.745991   47456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:50:04.746012   47456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-805849 NodeName:multinode-805849 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:50:04.746142   47456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-805849"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:50:04.746205   47456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:50:04.757014   47456 command_runner.go:130] > kubeadm
	I1011 21:50:04.757035   47456 command_runner.go:130] > kubectl
	I1011 21:50:04.757039   47456 command_runner.go:130] > kubelet
	I1011 21:50:04.757176   47456 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:50:04.757230   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 21:50:04.767311   47456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1011 21:50:04.787913   47456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:50:04.805818   47456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1011 21:50:04.823967   47456 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I1011 21:50:04.828363   47456 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
	I1011 21:50:04.828451   47456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:50:04.967646   47456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:50:04.982952   47456 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849 for IP: 192.168.39.81
	I1011 21:50:04.982972   47456 certs.go:194] generating shared ca certs ...
	I1011 21:50:04.983002   47456 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:50:04.983170   47456 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:50:04.983208   47456 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:50:04.983217   47456 certs.go:256] generating profile certs ...
	I1011 21:50:04.983290   47456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/client.key
	I1011 21:50:04.983353   47456 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key.9f23dda3
	I1011 21:50:04.983387   47456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key
	I1011 21:50:04.983398   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:50:04.983411   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:50:04.983423   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:50:04.983435   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:50:04.983446   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:50:04.983457   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:50:04.983469   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:50:04.983482   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:50:04.983549   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:50:04.983580   47456 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:50:04.983591   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:50:04.983613   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:50:04.983635   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:50:04.983657   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:50:04.983696   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:50:04.983725   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:50:04.983740   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:50:04.983754   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:04.984325   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:50:05.012149   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:50:05.039429   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:50:05.066494   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:50:05.094714   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:50:05.122683   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:50:05.150244   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:50:05.178423   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 21:50:05.205223   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:50:05.231808   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:50:05.258475   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:50:05.285274   47456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:50:05.303144   47456 ssh_runner.go:195] Run: openssl version
	I1011 21:50:05.309635   47456 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1011 21:50:05.309706   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:50:05.322175   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327057   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327090   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327137   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.333045   47456 command_runner.go:130] > 51391683
	I1011 21:50:05.333119   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:50:05.343642   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:50:05.355516   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360270   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360303   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360341   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.367234   47456 command_runner.go:130] > 3ec20f2e
	I1011 21:50:05.367303   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:50:05.378375   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:50:05.390060   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394761   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394795   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394884   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.400863   47456 command_runner.go:130] > b5213941
	I1011 21:50:05.400935   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:50:05.411520   47456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:50:05.416166   47456 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:50:05.416193   47456 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1011 21:50:05.416201   47456 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1011 21:50:05.416210   47456 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1011 21:50:05.416219   47456 command_runner.go:130] > Access: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416225   47456 command_runner.go:130] > Modify: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416232   47456 command_runner.go:130] > Change: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416241   47456 command_runner.go:130] >  Birth: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416304   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 21:50:05.422448   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.422523   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 21:50:05.428573   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.428679   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 21:50:05.434925   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.435007   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 21:50:05.441318   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.441392   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 21:50:05.448144   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.448350   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 21:50:05.454536   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.454604   47456 kubeadm.go:392] StartCluster: {Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:50:05.454748   47456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:50:05.454807   47456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:50:05.499946   47456 command_runner.go:130] > 95f70c75ea04f7589296a1c74af42977248076a1620b4ffce857606f4db48bd1
	I1011 21:50:05.499978   47456 command_runner.go:130] > a3ad8aa85c33e81e0a18337f1f571cd4b9f5fac4c3cd1649e464f81dbce15f22
	I1011 21:50:05.499986   47456 command_runner.go:130] > d9c5d9bef725aecac0711fb53c13ab9e41bd59afed1d16e72b20921a5fe48a35
	I1011 21:50:05.499995   47456 command_runner.go:130] > c8afbfb4ddae8d530502fba1ab7981ad2ff910a55a88375f34dba4a8f128bd75
	I1011 21:50:05.500002   47456 command_runner.go:130] > 07aaa90bbf4d1334f0e1cf2b47af81e11bb502f70b04f1d0f7cb3cbb9b8ad1e2
	I1011 21:50:05.500009   47456 command_runner.go:130] > cf4c036abc4a58d75b93613c36bf0387ec672f5134c1eb86fbbf37d0cf82de04
	I1011 21:50:05.500016   47456 command_runner.go:130] > bcf0281c7e55c9969ad85223b8ef6f7ea01338f3df18d2724c78ff1a23df04a2
	I1011 21:50:05.500025   47456 command_runner.go:130] > 8d0b183bb85d1b21849642f36896ea90b243d3938f86fa02c9c561696703abb5
	I1011 21:50:05.500047   47456 cri.go:89] found id: "95f70c75ea04f7589296a1c74af42977248076a1620b4ffce857606f4db48bd1"
	I1011 21:50:05.500058   47456 cri.go:89] found id: "a3ad8aa85c33e81e0a18337f1f571cd4b9f5fac4c3cd1649e464f81dbce15f22"
	I1011 21:50:05.500062   47456 cri.go:89] found id: "d9c5d9bef725aecac0711fb53c13ab9e41bd59afed1d16e72b20921a5fe48a35"
	I1011 21:50:05.500075   47456 cri.go:89] found id: "c8afbfb4ddae8d530502fba1ab7981ad2ff910a55a88375f34dba4a8f128bd75"
	I1011 21:50:05.500080   47456 cri.go:89] found id: "07aaa90bbf4d1334f0e1cf2b47af81e11bb502f70b04f1d0f7cb3cbb9b8ad1e2"
	I1011 21:50:05.500092   47456 cri.go:89] found id: "cf4c036abc4a58d75b93613c36bf0387ec672f5134c1eb86fbbf37d0cf82de04"
	I1011 21:50:05.500099   47456 cri.go:89] found id: "bcf0281c7e55c9969ad85223b8ef6f7ea01338f3df18d2724c78ff1a23df04a2"
	I1011 21:50:05.500106   47456 cri.go:89] found id: "8d0b183bb85d1b21849642f36896ea90b243d3938f86fa02c9c561696703abb5"
	I1011 21:50:05.500111   47456 cri.go:89] found id: ""
	I1011 21:50:05.500165   47456 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-805849 -n multinode-805849
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-805849 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.17s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 stop
E1011 21:52:06.382524   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:53:27.557093   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-805849 stop: exit status 82 (2m0.455633781s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-805849-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-805849 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 status: (18.636707414s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr: (3.359716472s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-805849 -n multinode-805849
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 logs -n 25: (2.023477864s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849:/home/docker/cp-test_multinode-805849-m02_multinode-805849.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849 sudo cat                                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m02_multinode-805849.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03:/home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849-m03 sudo cat                                   | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp testdata/cp-test.txt                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849:/home/docker/cp-test_multinode-805849-m03_multinode-805849.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849 sudo cat                                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02:/home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849-m02 sudo cat                                   | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-805849 node stop m03                                                          | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	| node    | multinode-805849 node start                                                             | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| stop    | -p multinode-805849                                                                     | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| start   | -p multinode-805849                                                                     | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC |                     |
	| node    | multinode-805849 node delete                                                            | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC | 11 Oct 24 21:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-805849 stop                                                                   | multinode-805849 | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:48:30.892131   47456 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:48:30.892251   47456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:48:30.892260   47456 out.go:358] Setting ErrFile to fd 2...
	I1011 21:48:30.892265   47456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:48:30.892444   47456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:48:30.892943   47456 out.go:352] Setting JSON to false
	I1011 21:48:30.893784   47456 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5456,"bootTime":1728677855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:48:30.893877   47456 start.go:139] virtualization: kvm guest
	I1011 21:48:30.896118   47456 out.go:177] * [multinode-805849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:48:30.897440   47456 notify.go:220] Checking for updates...
	I1011 21:48:30.897445   47456 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:48:30.899321   47456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:48:30.901232   47456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:48:30.902579   47456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:48:30.903605   47456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:48:30.904778   47456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:48:30.906672   47456 config.go:182] Loaded profile config "multinode-805849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:48:30.906796   47456 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:48:30.907464   47456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:48:30.907528   47456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:48:30.922938   47456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I1011 21:48:30.923467   47456 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:48:30.923953   47456 main.go:141] libmachine: Using API Version  1
	I1011 21:48:30.923971   47456 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:48:30.924274   47456 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:48:30.924462   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:30.959590   47456 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:48:30.960927   47456 start.go:297] selected driver: kvm2
	I1011 21:48:30.960938   47456 start.go:901] validating driver "kvm2" against &{Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:48:30.961075   47456 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:48:30.961419   47456 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:48:30.961493   47456 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:48:30.975675   47456 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:48:30.976300   47456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:48:30.976333   47456 cni.go:84] Creating CNI manager for ""
	I1011 21:48:30.976383   47456 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1011 21:48:30.976437   47456 start.go:340] cluster config:
	{Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:48:30.976568   47456 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:48:30.979087   47456 out.go:177] * Starting "multinode-805849" primary control-plane node in "multinode-805849" cluster
	I1011 21:48:30.980221   47456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:48:30.980260   47456 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 21:48:30.980272   47456 cache.go:56] Caching tarball of preloaded images
	I1011 21:48:30.980363   47456 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 21:48:30.980375   47456 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 21:48:30.980515   47456 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/config.json ...
	I1011 21:48:30.980725   47456 start.go:360] acquireMachinesLock for multinode-805849: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 21:48:30.980780   47456 start.go:364] duration metric: took 35.125µs to acquireMachinesLock for "multinode-805849"
	I1011 21:48:30.980799   47456 start.go:96] Skipping create...Using existing machine configuration
	I1011 21:48:30.980809   47456 fix.go:54] fixHost starting: 
	I1011 21:48:30.981093   47456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:48:30.981132   47456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:48:30.995898   47456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1011 21:48:30.996358   47456 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:48:30.996814   47456 main.go:141] libmachine: Using API Version  1
	I1011 21:48:30.996834   47456 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:48:30.997112   47456 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:48:30.997258   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:30.997394   47456 main.go:141] libmachine: (multinode-805849) Calling .GetState
	I1011 21:48:30.998824   47456 fix.go:112] recreateIfNeeded on multinode-805849: state=Running err=<nil>
	W1011 21:48:30.998841   47456 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 21:48:31.000550   47456 out.go:177] * Updating the running kvm2 "multinode-805849" VM ...
	I1011 21:48:31.001739   47456 machine.go:93] provisionDockerMachine start ...
	I1011 21:48:31.001756   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:48:31.001927   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.004023   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.004424   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.004446   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.004612   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.004759   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.004902   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.005029   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.005196   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.005377   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.005387   47456 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:48:31.119569   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-805849
	
	I1011 21:48:31.119604   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.119821   47456 buildroot.go:166] provisioning hostname "multinode-805849"
	I1011 21:48:31.119844   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.120047   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.122547   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.122932   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.122969   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.123061   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.123213   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.123373   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.123457   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.123598   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.123745   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.123756   47456 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-805849 && echo "multinode-805849" | sudo tee /etc/hostname
	I1011 21:48:31.251593   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-805849
	
	I1011 21:48:31.251626   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.254344   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.254747   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.254779   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.254942   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.255115   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.255242   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.255373   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.255497   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.255667   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.255689   47456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-805849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-805849/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-805849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:48:31.367503   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:48:31.367537   47456 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 21:48:31.367557   47456 buildroot.go:174] setting up certificates
	I1011 21:48:31.367565   47456 provision.go:84] configureAuth start
	I1011 21:48:31.367574   47456 main.go:141] libmachine: (multinode-805849) Calling .GetMachineName
	I1011 21:48:31.367868   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:48:31.370590   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.370988   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.371005   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.371178   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.373219   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.373562   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.373600   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.373768   47456 provision.go:143] copyHostCerts
	I1011 21:48:31.373807   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:48:31.373838   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 21:48:31.373848   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 21:48:31.373937   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 21:48:31.374035   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:48:31.374058   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 21:48:31.374066   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 21:48:31.374096   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 21:48:31.374157   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:48:31.374176   47456 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 21:48:31.374183   47456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 21:48:31.374209   47456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 21:48:31.374260   47456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.multinode-805849 san=[127.0.0.1 192.168.39.81 localhost minikube multinode-805849]
	I1011 21:48:31.616047   47456 provision.go:177] copyRemoteCerts
	I1011 21:48:31.616106   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:48:31.616131   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.618721   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.619059   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.619081   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.619354   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.619548   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.619705   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.619821   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:48:31.708292   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1011 21:48:31.708361   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 21:48:31.738460   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1011 21:48:31.738532   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1011 21:48:31.762620   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1011 21:48:31.762695   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:48:31.788411   47456 provision.go:87] duration metric: took 420.835056ms to configureAuth
	I1011 21:48:31.788436   47456 buildroot.go:189] setting minikube options for container-runtime
	I1011 21:48:31.788648   47456 config.go:182] Loaded profile config "multinode-805849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:48:31.788727   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:48:31.791822   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.792319   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:48:31.792344   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:48:31.792520   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:48:31.792667   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.792861   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:48:31.793015   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:48:31.793169   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:48:31.793325   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:48:31.793338   47456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 21:50:02.701475   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 21:50:02.701513   47456 machine.go:96] duration metric: took 1m31.699762359s to provisionDockerMachine
	I1011 21:50:02.701543   47456 start.go:293] postStartSetup for "multinode-805849" (driver="kvm2")
	I1011 21:50:02.701567   47456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:50:02.701600   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.701974   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:50:02.702012   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.705603   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.706053   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.706085   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.706259   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.706469   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.706607   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.706764   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:02.794924   47456 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:50:02.799421   47456 command_runner.go:130] > NAME=Buildroot
	I1011 21:50:02.799443   47456 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1011 21:50:02.799448   47456 command_runner.go:130] > ID=buildroot
	I1011 21:50:02.799453   47456 command_runner.go:130] > VERSION_ID=2023.02.9
	I1011 21:50:02.799459   47456 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1011 21:50:02.799503   47456 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 21:50:02.799524   47456 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 21:50:02.799614   47456 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 21:50:02.799725   47456 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 21:50:02.799738   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /etc/ssl/certs/188142.pem
	I1011 21:50:02.799844   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:50:02.809979   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:50:02.836413   47456 start.go:296] duration metric: took 134.853741ms for postStartSetup
	I1011 21:50:02.836491   47456 fix.go:56] duration metric: took 1m31.855682436s for fixHost
	I1011 21:50:02.836526   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.839362   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.839705   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.839732   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.839913   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.840098   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.840241   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.840415   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.840610   47456 main.go:141] libmachine: Using SSH client type: native
	I1011 21:50:02.840809   47456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1011 21:50:02.840821   47456 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 21:50:02.951896   47456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728683402.926758339
	
	I1011 21:50:02.951917   47456 fix.go:216] guest clock: 1728683402.926758339
	I1011 21:50:02.951923   47456 fix.go:229] Guest: 2024-10-11 21:50:02.926758339 +0000 UTC Remote: 2024-10-11 21:50:02.836496536 +0000 UTC m=+91.982626614 (delta=90.261803ms)
	I1011 21:50:02.951980   47456 fix.go:200] guest clock delta is within tolerance: 90.261803ms
	I1011 21:50:02.951988   47456 start.go:83] releasing machines lock for "multinode-805849", held for 1m31.971196448s
	I1011 21:50:02.952025   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.952302   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:50:02.955292   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.955631   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.955654   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.955879   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956432   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956624   47456 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:50:02.956722   47456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:50:02.956780   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.956841   47456 ssh_runner.go:195] Run: cat /version.json
	I1011 21:50:02.956863   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:50:02.959832   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960009   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960207   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.960237   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960405   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.960554   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:02.960577   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.960579   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:02.960685   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:50:02.960757   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.960842   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:50:02.960892   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:02.960961   47456 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:50:02.961086   47456 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:50:03.043861   47456 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1011 21:50:03.062991   47456 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1011 21:50:03.063850   47456 ssh_runner.go:195] Run: systemctl --version
	I1011 21:50:03.070388   47456 command_runner.go:130] > systemd 252 (252)
	I1011 21:50:03.070425   47456 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1011 21:50:03.070609   47456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 21:50:03.234510   47456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 21:50:03.243291   47456 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1011 21:50:03.243678   47456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 21:50:03.243745   47456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:50:03.254489   47456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1011 21:50:03.254516   47456 start.go:495] detecting cgroup driver to use...
	I1011 21:50:03.254587   47456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 21:50:03.272170   47456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 21:50:03.287786   47456 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:50:03.287850   47456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:50:03.302993   47456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:50:03.317766   47456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:50:03.467526   47456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:50:03.618083   47456 docker.go:233] disabling docker service ...
	I1011 21:50:03.618154   47456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:50:03.637059   47456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:50:03.651919   47456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:50:03.814765   47456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:50:03.970836   47456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:50:03.985685   47456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:50:04.006288   47456 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1011 21:50:04.006336   47456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 21:50:04.006392   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.018003   47456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 21:50:04.018062   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.029205   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.040152   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.051395   47456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:50:04.062857   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.075054   47456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.087655   47456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 21:50:04.098878   47456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:50:04.109155   47456 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1011 21:50:04.109234   47456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:50:04.119456   47456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:50:04.258845   47456 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 21:50:04.482203   47456 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 21:50:04.482285   47456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 21:50:04.487700   47456 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1011 21:50:04.487730   47456 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1011 21:50:04.487739   47456 command_runner.go:130] > Device: 0,22	Inode: 1299        Links: 1
	I1011 21:50:04.487749   47456 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1011 21:50:04.487757   47456 command_runner.go:130] > Access: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487766   47456 command_runner.go:130] > Modify: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487772   47456 command_runner.go:130] > Change: 2024-10-11 21:50:04.332107543 +0000
	I1011 21:50:04.487778   47456 command_runner.go:130] >  Birth: -
	I1011 21:50:04.487799   47456 start.go:563] Will wait 60s for crictl version
	I1011 21:50:04.487870   47456 ssh_runner.go:195] Run: which crictl
	I1011 21:50:04.492207   47456 command_runner.go:130] > /usr/bin/crictl
	I1011 21:50:04.492297   47456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:50:04.529861   47456 command_runner.go:130] > Version:  0.1.0
	I1011 21:50:04.529896   47456 command_runner.go:130] > RuntimeName:  cri-o
	I1011 21:50:04.529905   47456 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1011 21:50:04.529912   47456 command_runner.go:130] > RuntimeApiVersion:  v1
	I1011 21:50:04.529931   47456 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 21:50:04.529993   47456 ssh_runner.go:195] Run: crio --version
	I1011 21:50:04.559696   47456 command_runner.go:130] > crio version 1.29.1
	I1011 21:50:04.559721   47456 command_runner.go:130] > Version:        1.29.1
	I1011 21:50:04.559731   47456 command_runner.go:130] > GitCommit:      unknown
	I1011 21:50:04.559738   47456 command_runner.go:130] > GitCommitDate:  unknown
	I1011 21:50:04.559746   47456 command_runner.go:130] > GitTreeState:   clean
	I1011 21:50:04.559752   47456 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1011 21:50:04.559756   47456 command_runner.go:130] > GoVersion:      go1.21.6
	I1011 21:50:04.559760   47456 command_runner.go:130] > Compiler:       gc
	I1011 21:50:04.559765   47456 command_runner.go:130] > Platform:       linux/amd64
	I1011 21:50:04.559769   47456 command_runner.go:130] > Linkmode:       dynamic
	I1011 21:50:04.559773   47456 command_runner.go:130] > BuildTags:      
	I1011 21:50:04.559777   47456 command_runner.go:130] >   containers_image_ostree_stub
	I1011 21:50:04.559782   47456 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1011 21:50:04.559789   47456 command_runner.go:130] >   btrfs_noversion
	I1011 21:50:04.559797   47456 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1011 21:50:04.559803   47456 command_runner.go:130] >   libdm_no_deferred_remove
	I1011 21:50:04.559813   47456 command_runner.go:130] >   seccomp
	I1011 21:50:04.559820   47456 command_runner.go:130] > LDFlags:          unknown
	I1011 21:50:04.559827   47456 command_runner.go:130] > SeccompEnabled:   true
	I1011 21:50:04.559836   47456 command_runner.go:130] > AppArmorEnabled:  false
	I1011 21:50:04.561302   47456 ssh_runner.go:195] Run: crio --version
	I1011 21:50:04.590693   47456 command_runner.go:130] > crio version 1.29.1
	I1011 21:50:04.590719   47456 command_runner.go:130] > Version:        1.29.1
	I1011 21:50:04.590730   47456 command_runner.go:130] > GitCommit:      unknown
	I1011 21:50:04.590736   47456 command_runner.go:130] > GitCommitDate:  unknown
	I1011 21:50:04.590742   47456 command_runner.go:130] > GitTreeState:   clean
	I1011 21:50:04.590749   47456 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1011 21:50:04.590754   47456 command_runner.go:130] > GoVersion:      go1.21.6
	I1011 21:50:04.590760   47456 command_runner.go:130] > Compiler:       gc
	I1011 21:50:04.590767   47456 command_runner.go:130] > Platform:       linux/amd64
	I1011 21:50:04.590772   47456 command_runner.go:130] > Linkmode:       dynamic
	I1011 21:50:04.590778   47456 command_runner.go:130] > BuildTags:      
	I1011 21:50:04.590785   47456 command_runner.go:130] >   containers_image_ostree_stub
	I1011 21:50:04.590792   47456 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1011 21:50:04.590802   47456 command_runner.go:130] >   btrfs_noversion
	I1011 21:50:04.590810   47456 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1011 21:50:04.590817   47456 command_runner.go:130] >   libdm_no_deferred_remove
	I1011 21:50:04.590823   47456 command_runner.go:130] >   seccomp
	I1011 21:50:04.590831   47456 command_runner.go:130] > LDFlags:          unknown
	I1011 21:50:04.590838   47456 command_runner.go:130] > SeccompEnabled:   true
	I1011 21:50:04.590845   47456 command_runner.go:130] > AppArmorEnabled:  false
	I1011 21:50:04.594466   47456 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 21:50:04.595750   47456 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:50:04.598340   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:04.598679   47456 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:50:04.598715   47456 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:50:04.598921   47456 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 21:50:04.603762   47456 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1011 21:50:04.603882   47456 kubeadm.go:883] updating cluster {Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:50:04.604131   47456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 21:50:04.604210   47456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:50:04.654051   47456 command_runner.go:130] > {
	I1011 21:50:04.654085   47456 command_runner.go:130] >   "images": [
	I1011 21:50:04.654092   47456 command_runner.go:130] >     {
	I1011 21:50:04.654104   47456 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1011 21:50:04.654111   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654121   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1011 21:50:04.654131   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654138   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654157   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1011 21:50:04.654177   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1011 21:50:04.654184   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654192   47456 command_runner.go:130] >       "size": "87190579",
	I1011 21:50:04.654201   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654213   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654226   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654237   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654244   47456 command_runner.go:130] >     },
	I1011 21:50:04.654254   47456 command_runner.go:130] >     {
	I1011 21:50:04.654271   47456 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1011 21:50:04.654283   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654295   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1011 21:50:04.654304   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654314   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654331   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1011 21:50:04.654373   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1011 21:50:04.654389   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654393   47456 command_runner.go:130] >       "size": "94965812",
	I1011 21:50:04.654401   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654409   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654416   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654420   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654426   47456 command_runner.go:130] >     },
	I1011 21:50:04.654430   47456 command_runner.go:130] >     {
	I1011 21:50:04.654436   47456 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1011 21:50:04.654444   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654452   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1011 21:50:04.654456   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654463   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654471   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1011 21:50:04.654481   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1011 21:50:04.654487   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654492   47456 command_runner.go:130] >       "size": "1363676",
	I1011 21:50:04.654499   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654503   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654509   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654514   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654518   47456 command_runner.go:130] >     },
	I1011 21:50:04.654524   47456 command_runner.go:130] >     {
	I1011 21:50:04.654530   47456 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1011 21:50:04.654536   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654542   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1011 21:50:04.654549   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654553   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654563   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1011 21:50:04.654576   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1011 21:50:04.654582   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654587   47456 command_runner.go:130] >       "size": "31470524",
	I1011 21:50:04.654594   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654599   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654605   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654609   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654638   47456 command_runner.go:130] >     },
	I1011 21:50:04.654648   47456 command_runner.go:130] >     {
	I1011 21:50:04.654658   47456 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1011 21:50:04.654668   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654677   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1011 21:50:04.654684   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654690   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654700   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1011 21:50:04.654710   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1011 21:50:04.654716   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654721   47456 command_runner.go:130] >       "size": "63273227",
	I1011 21:50:04.654728   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.654733   47456 command_runner.go:130] >       "username": "nonroot",
	I1011 21:50:04.654739   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654743   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654749   47456 command_runner.go:130] >     },
	I1011 21:50:04.654753   47456 command_runner.go:130] >     {
	I1011 21:50:04.654761   47456 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1011 21:50:04.654766   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654772   47456 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1011 21:50:04.654776   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654780   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654790   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1011 21:50:04.654799   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1011 21:50:04.654805   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654809   47456 command_runner.go:130] >       "size": "149009664",
	I1011 21:50:04.654817   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.654824   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.654828   47456 command_runner.go:130] >       },
	I1011 21:50:04.654834   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654838   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654844   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654848   47456 command_runner.go:130] >     },
	I1011 21:50:04.654852   47456 command_runner.go:130] >     {
	I1011 21:50:04.654861   47456 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1011 21:50:04.654868   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.654873   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1011 21:50:04.654879   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654883   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.654893   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1011 21:50:04.654903   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1011 21:50:04.654910   47456 command_runner.go:130] >       ],
	I1011 21:50:04.654914   47456 command_runner.go:130] >       "size": "95237600",
	I1011 21:50:04.654920   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.654924   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.654935   47456 command_runner.go:130] >       },
	I1011 21:50:04.654942   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.654952   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.654963   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.654972   47456 command_runner.go:130] >     },
	I1011 21:50:04.654982   47456 command_runner.go:130] >     {
	I1011 21:50:04.654996   47456 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1011 21:50:04.655006   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655019   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1011 21:50:04.655030   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655040   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655059   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1011 21:50:04.655070   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1011 21:50:04.655077   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655081   47456 command_runner.go:130] >       "size": "89437508",
	I1011 21:50:04.655087   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655092   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.655098   47456 command_runner.go:130] >       },
	I1011 21:50:04.655102   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655107   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655111   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655114   47456 command_runner.go:130] >     },
	I1011 21:50:04.655117   47456 command_runner.go:130] >     {
	I1011 21:50:04.655123   47456 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1011 21:50:04.655126   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655132   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1011 21:50:04.655137   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655143   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655151   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1011 21:50:04.655161   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1011 21:50:04.655167   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655172   47456 command_runner.go:130] >       "size": "92733849",
	I1011 21:50:04.655178   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.655181   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655185   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655189   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655193   47456 command_runner.go:130] >     },
	I1011 21:50:04.655198   47456 command_runner.go:130] >     {
	I1011 21:50:04.655205   47456 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1011 21:50:04.655211   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655217   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1011 21:50:04.655223   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655227   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655236   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1011 21:50:04.655245   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1011 21:50:04.655252   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655257   47456 command_runner.go:130] >       "size": "68420934",
	I1011 21:50:04.655263   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655267   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.655274   47456 command_runner.go:130] >       },
	I1011 21:50:04.655279   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655285   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655290   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.655297   47456 command_runner.go:130] >     },
	I1011 21:50:04.655300   47456 command_runner.go:130] >     {
	I1011 21:50:04.655308   47456 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1011 21:50:04.655315   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.655320   47456 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1011 21:50:04.655326   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655331   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.655340   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1011 21:50:04.655349   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1011 21:50:04.655353   47456 command_runner.go:130] >       ],
	I1011 21:50:04.655360   47456 command_runner.go:130] >       "size": "742080",
	I1011 21:50:04.655364   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.655371   47456 command_runner.go:130] >         "value": "65535"
	I1011 21:50:04.655375   47456 command_runner.go:130] >       },
	I1011 21:50:04.655380   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.655390   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.655394   47456 command_runner.go:130] >       "pinned": true
	I1011 21:50:04.655400   47456 command_runner.go:130] >     }
	I1011 21:50:04.655406   47456 command_runner.go:130] >   ]
	I1011 21:50:04.655411   47456 command_runner.go:130] > }
	I1011 21:50:04.655578   47456 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:50:04.655591   47456 crio.go:433] Images already preloaded, skipping extraction
	I1011 21:50:04.655646   47456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:50:04.695803   47456 command_runner.go:130] > {
	I1011 21:50:04.695829   47456 command_runner.go:130] >   "images": [
	I1011 21:50:04.695836   47456 command_runner.go:130] >     {
	I1011 21:50:04.695847   47456 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1011 21:50:04.695853   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.695860   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1011 21:50:04.695866   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695874   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.695886   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1011 21:50:04.695898   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1011 21:50:04.695909   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695916   47456 command_runner.go:130] >       "size": "87190579",
	I1011 21:50:04.695921   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.695928   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.695937   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.695944   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.695950   47456 command_runner.go:130] >     },
	I1011 21:50:04.695955   47456 command_runner.go:130] >     {
	I1011 21:50:04.695965   47456 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1011 21:50:04.695971   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.695979   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1011 21:50:04.695985   47456 command_runner.go:130] >       ],
	I1011 21:50:04.695991   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696003   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1011 21:50:04.696014   47456 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1011 21:50:04.696023   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696030   47456 command_runner.go:130] >       "size": "94965812",
	I1011 21:50:04.696038   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696052   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696061   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696071   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696077   47456 command_runner.go:130] >     },
	I1011 21:50:04.696085   47456 command_runner.go:130] >     {
	I1011 21:50:04.696097   47456 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1011 21:50:04.696104   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696109   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1011 21:50:04.696115   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696119   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696128   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1011 21:50:04.696138   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1011 21:50:04.696145   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696149   47456 command_runner.go:130] >       "size": "1363676",
	I1011 21:50:04.696156   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696162   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696179   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696185   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696189   47456 command_runner.go:130] >     },
	I1011 21:50:04.696195   47456 command_runner.go:130] >     {
	I1011 21:50:04.696201   47456 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1011 21:50:04.696207   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696213   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1011 21:50:04.696219   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696223   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696233   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1011 21:50:04.696245   47456 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1011 21:50:04.696251   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696255   47456 command_runner.go:130] >       "size": "31470524",
	I1011 21:50:04.696261   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696265   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696271   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696275   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696280   47456 command_runner.go:130] >     },
	I1011 21:50:04.696283   47456 command_runner.go:130] >     {
	I1011 21:50:04.696291   47456 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1011 21:50:04.696297   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696302   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1011 21:50:04.696308   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696312   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696321   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1011 21:50:04.696330   47456 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1011 21:50:04.696335   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696339   47456 command_runner.go:130] >       "size": "63273227",
	I1011 21:50:04.696344   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696349   47456 command_runner.go:130] >       "username": "nonroot",
	I1011 21:50:04.696355   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696359   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696365   47456 command_runner.go:130] >     },
	I1011 21:50:04.696370   47456 command_runner.go:130] >     {
	I1011 21:50:04.696379   47456 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1011 21:50:04.696383   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696388   47456 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1011 21:50:04.696393   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696397   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696413   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1011 21:50:04.696423   47456 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1011 21:50:04.696427   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696430   47456 command_runner.go:130] >       "size": "149009664",
	I1011 21:50:04.696434   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696437   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696443   47456 command_runner.go:130] >       },
	I1011 21:50:04.696447   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696451   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696455   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696458   47456 command_runner.go:130] >     },
	I1011 21:50:04.696461   47456 command_runner.go:130] >     {
	I1011 21:50:04.696467   47456 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1011 21:50:04.696470   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696475   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1011 21:50:04.696478   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696482   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696489   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1011 21:50:04.696498   47456 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1011 21:50:04.696503   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696508   47456 command_runner.go:130] >       "size": "95237600",
	I1011 21:50:04.696512   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696516   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696520   47456 command_runner.go:130] >       },
	I1011 21:50:04.696527   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696530   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696537   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696540   47456 command_runner.go:130] >     },
	I1011 21:50:04.696545   47456 command_runner.go:130] >     {
	I1011 21:50:04.696551   47456 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1011 21:50:04.696557   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696562   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1011 21:50:04.696566   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696572   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696584   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1011 21:50:04.696594   47456 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1011 21:50:04.696599   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696604   47456 command_runner.go:130] >       "size": "89437508",
	I1011 21:50:04.696610   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696614   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696619   47456 command_runner.go:130] >       },
	I1011 21:50:04.696623   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696629   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696633   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696638   47456 command_runner.go:130] >     },
	I1011 21:50:04.696641   47456 command_runner.go:130] >     {
	I1011 21:50:04.696647   47456 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1011 21:50:04.696653   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696658   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1011 21:50:04.696664   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696668   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696677   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1011 21:50:04.696688   47456 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1011 21:50:04.696694   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696698   47456 command_runner.go:130] >       "size": "92733849",
	I1011 21:50:04.696704   47456 command_runner.go:130] >       "uid": null,
	I1011 21:50:04.696709   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696715   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696719   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696724   47456 command_runner.go:130] >     },
	I1011 21:50:04.696728   47456 command_runner.go:130] >     {
	I1011 21:50:04.696733   47456 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1011 21:50:04.696739   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696745   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1011 21:50:04.696750   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696754   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696763   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1011 21:50:04.696773   47456 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1011 21:50:04.696780   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696784   47456 command_runner.go:130] >       "size": "68420934",
	I1011 21:50:04.696790   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696795   47456 command_runner.go:130] >         "value": "0"
	I1011 21:50:04.696800   47456 command_runner.go:130] >       },
	I1011 21:50:04.696804   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696810   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696814   47456 command_runner.go:130] >       "pinned": false
	I1011 21:50:04.696817   47456 command_runner.go:130] >     },
	I1011 21:50:04.696823   47456 command_runner.go:130] >     {
	I1011 21:50:04.696829   47456 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1011 21:50:04.696835   47456 command_runner.go:130] >       "repoTags": [
	I1011 21:50:04.696840   47456 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1011 21:50:04.696845   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696849   47456 command_runner.go:130] >       "repoDigests": [
	I1011 21:50:04.696858   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1011 21:50:04.696865   47456 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1011 21:50:04.696870   47456 command_runner.go:130] >       ],
	I1011 21:50:04.696875   47456 command_runner.go:130] >       "size": "742080",
	I1011 21:50:04.696881   47456 command_runner.go:130] >       "uid": {
	I1011 21:50:04.696885   47456 command_runner.go:130] >         "value": "65535"
	I1011 21:50:04.696891   47456 command_runner.go:130] >       },
	I1011 21:50:04.696894   47456 command_runner.go:130] >       "username": "",
	I1011 21:50:04.696900   47456 command_runner.go:130] >       "spec": null,
	I1011 21:50:04.696904   47456 command_runner.go:130] >       "pinned": true
	I1011 21:50:04.696909   47456 command_runner.go:130] >     }
	I1011 21:50:04.696914   47456 command_runner.go:130] >   ]
	I1011 21:50:04.696919   47456 command_runner.go:130] > }
	I1011 21:50:04.697060   47456 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 21:50:04.697074   47456 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:50:04.697081   47456 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.31.1 crio true true} ...
	I1011 21:50:04.697175   47456 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-805849 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:50:04.697241   47456 ssh_runner.go:195] Run: crio config
	I1011 21:50:04.740738   47456 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1011 21:50:04.740767   47456 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1011 21:50:04.740774   47456 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1011 21:50:04.740777   47456 command_runner.go:130] > #
	I1011 21:50:04.740784   47456 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1011 21:50:04.740790   47456 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1011 21:50:04.740796   47456 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1011 21:50:04.740802   47456 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1011 21:50:04.740805   47456 command_runner.go:130] > # reload'.
	I1011 21:50:04.740817   47456 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1011 21:50:04.740826   47456 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1011 21:50:04.740840   47456 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1011 21:50:04.740849   47456 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1011 21:50:04.740857   47456 command_runner.go:130] > [crio]
	I1011 21:50:04.740865   47456 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1011 21:50:04.740873   47456 command_runner.go:130] > # containers images, in this directory.
	I1011 21:50:04.741234   47456 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1011 21:50:04.741270   47456 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1011 21:50:04.741279   47456 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1011 21:50:04.741294   47456 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1011 21:50:04.741301   47456 command_runner.go:130] > # imagestore = ""
	I1011 21:50:04.741312   47456 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1011 21:50:04.741323   47456 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1011 21:50:04.741334   47456 command_runner.go:130] > storage_driver = "overlay"
	I1011 21:50:04.741343   47456 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1011 21:50:04.741355   47456 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1011 21:50:04.741363   47456 command_runner.go:130] > storage_option = [
	I1011 21:50:04.741392   47456 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1011 21:50:04.741410   47456 command_runner.go:130] > ]
	I1011 21:50:04.741418   47456 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1011 21:50:04.741443   47456 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1011 21:50:04.741455   47456 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1011 21:50:04.741467   47456 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1011 21:50:04.741476   47456 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1011 21:50:04.741482   47456 command_runner.go:130] > # always happen on a node reboot
	I1011 21:50:04.741492   47456 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1011 21:50:04.741505   47456 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1011 21:50:04.741517   47456 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1011 21:50:04.741525   47456 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1011 21:50:04.741538   47456 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1011 21:50:04.741553   47456 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1011 21:50:04.741567   47456 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1011 21:50:04.741575   47456 command_runner.go:130] > # internal_wipe = true
	I1011 21:50:04.741584   47456 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1011 21:50:04.741590   47456 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1011 21:50:04.741595   47456 command_runner.go:130] > # internal_repair = false
	I1011 21:50:04.741606   47456 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1011 21:50:04.741618   47456 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1011 21:50:04.741631   47456 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1011 21:50:04.741642   47456 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1011 21:50:04.741651   47456 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1011 21:50:04.741656   47456 command_runner.go:130] > [crio.api]
	I1011 21:50:04.741664   47456 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1011 21:50:04.741671   47456 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1011 21:50:04.741679   47456 command_runner.go:130] > # IP address on which the stream server will listen.
	I1011 21:50:04.741686   47456 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1011 21:50:04.741692   47456 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1011 21:50:04.741697   47456 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1011 21:50:04.741701   47456 command_runner.go:130] > # stream_port = "0"
	I1011 21:50:04.741706   47456 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1011 21:50:04.741713   47456 command_runner.go:130] > # stream_enable_tls = false
	I1011 21:50:04.741719   47456 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1011 21:50:04.741725   47456 command_runner.go:130] > # stream_idle_timeout = ""
	I1011 21:50:04.741735   47456 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1011 21:50:04.741747   47456 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1011 21:50:04.741757   47456 command_runner.go:130] > # minutes.
	I1011 21:50:04.741763   47456 command_runner.go:130] > # stream_tls_cert = ""
	I1011 21:50:04.741779   47456 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1011 21:50:04.741792   47456 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1011 21:50:04.741802   47456 command_runner.go:130] > # stream_tls_key = ""
	I1011 21:50:04.741811   47456 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1011 21:50:04.741824   47456 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1011 21:50:04.741839   47456 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1011 21:50:04.741848   47456 command_runner.go:130] > # stream_tls_ca = ""
	I1011 21:50:04.741860   47456 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1011 21:50:04.741871   47456 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1011 21:50:04.741886   47456 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1011 21:50:04.741897   47456 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1011 21:50:04.741910   47456 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1011 21:50:04.741919   47456 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1011 21:50:04.741923   47456 command_runner.go:130] > [crio.runtime]
	I1011 21:50:04.741929   47456 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1011 21:50:04.741940   47456 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1011 21:50:04.741950   47456 command_runner.go:130] > # "nofile=1024:2048"
	I1011 21:50:04.741960   47456 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1011 21:50:04.741969   47456 command_runner.go:130] > # default_ulimits = [
	I1011 21:50:04.741975   47456 command_runner.go:130] > # ]
	I1011 21:50:04.741987   47456 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1011 21:50:04.741996   47456 command_runner.go:130] > # no_pivot = false
	I1011 21:50:04.742005   47456 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1011 21:50:04.742013   47456 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1011 21:50:04.742017   47456 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1011 21:50:04.742025   47456 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1011 21:50:04.742029   47456 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1011 21:50:04.742037   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1011 21:50:04.742041   47456 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1011 21:50:04.742045   47456 command_runner.go:130] > # Cgroup setting for conmon
	I1011 21:50:04.742051   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1011 21:50:04.742059   47456 command_runner.go:130] > conmon_cgroup = "pod"
	I1011 21:50:04.742068   47456 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1011 21:50:04.742079   47456 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1011 21:50:04.742089   47456 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1011 21:50:04.742098   47456 command_runner.go:130] > conmon_env = [
	I1011 21:50:04.742108   47456 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1011 21:50:04.742121   47456 command_runner.go:130] > ]
	I1011 21:50:04.742131   47456 command_runner.go:130] > # Additional environment variables to set for all the
	I1011 21:50:04.742142   47456 command_runner.go:130] > # containers. These are overridden if set in the
	I1011 21:50:04.742155   47456 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1011 21:50:04.742162   47456 command_runner.go:130] > # default_env = [
	I1011 21:50:04.742170   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742178   47456 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1011 21:50:04.742192   47456 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1011 21:50:04.742214   47456 command_runner.go:130] > # selinux = false
	I1011 21:50:04.742224   47456 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1011 21:50:04.742234   47456 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1011 21:50:04.742246   47456 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1011 21:50:04.742254   47456 command_runner.go:130] > # seccomp_profile = ""
	I1011 21:50:04.742264   47456 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1011 21:50:04.742275   47456 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1011 21:50:04.742288   47456 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1011 21:50:04.742296   47456 command_runner.go:130] > # which might increase security.
	I1011 21:50:04.742303   47456 command_runner.go:130] > # This option is currently deprecated,
	I1011 21:50:04.742316   47456 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1011 21:50:04.742326   47456 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1011 21:50:04.742335   47456 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1011 21:50:04.742349   47456 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1011 21:50:04.742364   47456 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1011 21:50:04.742374   47456 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1011 21:50:04.742382   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.742390   47456 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1011 21:50:04.742398   47456 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1011 21:50:04.742407   47456 command_runner.go:130] > # the cgroup blockio controller.
	I1011 21:50:04.742412   47456 command_runner.go:130] > # blockio_config_file = ""
	I1011 21:50:04.742423   47456 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1011 21:50:04.742433   47456 command_runner.go:130] > # blockio parameters.
	I1011 21:50:04.742442   47456 command_runner.go:130] > # blockio_reload = false
	I1011 21:50:04.742455   47456 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1011 21:50:04.742465   47456 command_runner.go:130] > # irqbalance daemon.
	I1011 21:50:04.742474   47456 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1011 21:50:04.742485   47456 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1011 21:50:04.742498   47456 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1011 21:50:04.742507   47456 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1011 21:50:04.742523   47456 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1011 21:50:04.742537   47456 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1011 21:50:04.742546   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.742557   47456 command_runner.go:130] > # rdt_config_file = ""
	I1011 21:50:04.742569   47456 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1011 21:50:04.742580   47456 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1011 21:50:04.742600   47456 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1011 21:50:04.742609   47456 command_runner.go:130] > # separate_pull_cgroup = ""
	I1011 21:50:04.742630   47456 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1011 21:50:04.742643   47456 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1011 21:50:04.742653   47456 command_runner.go:130] > # will be added.
	I1011 21:50:04.742660   47456 command_runner.go:130] > # default_capabilities = [
	I1011 21:50:04.742669   47456 command_runner.go:130] > # 	"CHOWN",
	I1011 21:50:04.742675   47456 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1011 21:50:04.742681   47456 command_runner.go:130] > # 	"FSETID",
	I1011 21:50:04.742690   47456 command_runner.go:130] > # 	"FOWNER",
	I1011 21:50:04.742696   47456 command_runner.go:130] > # 	"SETGID",
	I1011 21:50:04.742706   47456 command_runner.go:130] > # 	"SETUID",
	I1011 21:50:04.742712   47456 command_runner.go:130] > # 	"SETPCAP",
	I1011 21:50:04.742718   47456 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1011 21:50:04.742725   47456 command_runner.go:130] > # 	"KILL",
	I1011 21:50:04.742731   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742744   47456 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1011 21:50:04.742757   47456 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1011 21:50:04.742766   47456 command_runner.go:130] > # add_inheritable_capabilities = false
	I1011 21:50:04.742778   47456 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1011 21:50:04.742790   47456 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1011 21:50:04.742799   47456 command_runner.go:130] > default_sysctls = [
	I1011 21:50:04.742807   47456 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1011 21:50:04.742814   47456 command_runner.go:130] > ]
	I1011 21:50:04.742822   47456 command_runner.go:130] > # List of devices on the host that a
	I1011 21:50:04.742834   47456 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1011 21:50:04.742843   47456 command_runner.go:130] > # allowed_devices = [
	I1011 21:50:04.742851   47456 command_runner.go:130] > # 	"/dev/fuse",
	I1011 21:50:04.742856   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742863   47456 command_runner.go:130] > # List of additional devices. specified as
	I1011 21:50:04.742878   47456 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1011 21:50:04.742888   47456 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1011 21:50:04.742898   47456 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1011 21:50:04.742907   47456 command_runner.go:130] > # additional_devices = [
	I1011 21:50:04.742912   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742920   47456 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1011 21:50:04.742935   47456 command_runner.go:130] > # cdi_spec_dirs = [
	I1011 21:50:04.742944   47456 command_runner.go:130] > # 	"/etc/cdi",
	I1011 21:50:04.742950   47456 command_runner.go:130] > # 	"/var/run/cdi",
	I1011 21:50:04.742956   47456 command_runner.go:130] > # ]
	I1011 21:50:04.742966   47456 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1011 21:50:04.742979   47456 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1011 21:50:04.742989   47456 command_runner.go:130] > # Defaults to false.
	I1011 21:50:04.742997   47456 command_runner.go:130] > # device_ownership_from_security_context = false
	I1011 21:50:04.743010   47456 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1011 21:50:04.743022   47456 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1011 21:50:04.743031   47456 command_runner.go:130] > # hooks_dir = [
	I1011 21:50:04.743038   47456 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1011 21:50:04.743047   47456 command_runner.go:130] > # ]
	I1011 21:50:04.743059   47456 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1011 21:50:04.743073   47456 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1011 21:50:04.743084   47456 command_runner.go:130] > # its default mounts from the following two files:
	I1011 21:50:04.743089   47456 command_runner.go:130] > #
	I1011 21:50:04.743101   47456 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1011 21:50:04.743115   47456 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1011 21:50:04.743127   47456 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1011 21:50:04.743135   47456 command_runner.go:130] > #
	I1011 21:50:04.743143   47456 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1011 21:50:04.743157   47456 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1011 21:50:04.743170   47456 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1011 21:50:04.743181   47456 command_runner.go:130] > #      only add mounts it finds in this file.
	I1011 21:50:04.743186   47456 command_runner.go:130] > #
	I1011 21:50:04.743198   47456 command_runner.go:130] > # default_mounts_file = ""
	I1011 21:50:04.743209   47456 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1011 21:50:04.743220   47456 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1011 21:50:04.743224   47456 command_runner.go:130] > pids_limit = 1024
	I1011 21:50:04.743230   47456 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1011 21:50:04.743239   47456 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1011 21:50:04.743245   47456 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1011 21:50:04.743253   47456 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1011 21:50:04.743259   47456 command_runner.go:130] > # log_size_max = -1
	I1011 21:50:04.743265   47456 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1011 21:50:04.743272   47456 command_runner.go:130] > # log_to_journald = false
	I1011 21:50:04.743278   47456 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1011 21:50:04.743285   47456 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1011 21:50:04.743308   47456 command_runner.go:130] > # Path to directory for container attach sockets.
	I1011 21:50:04.743321   47456 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1011 21:50:04.743329   47456 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1011 21:50:04.743339   47456 command_runner.go:130] > # bind_mount_prefix = ""
	I1011 21:50:04.743351   47456 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1011 21:50:04.743359   47456 command_runner.go:130] > # read_only = false
	I1011 21:50:04.743369   47456 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1011 21:50:04.743383   47456 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1011 21:50:04.743394   47456 command_runner.go:130] > # live configuration reload.
	I1011 21:50:04.743403   47456 command_runner.go:130] > # log_level = "info"
	I1011 21:50:04.743412   47456 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1011 21:50:04.743423   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.743430   47456 command_runner.go:130] > # log_filter = ""
	I1011 21:50:04.743440   47456 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1011 21:50:04.743452   47456 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1011 21:50:04.743461   47456 command_runner.go:130] > # separated by comma.
	I1011 21:50:04.743473   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743482   47456 command_runner.go:130] > # uid_mappings = ""
	I1011 21:50:04.743493   47456 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1011 21:50:04.743506   47456 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1011 21:50:04.743514   47456 command_runner.go:130] > # separated by comma.
	I1011 21:50:04.743528   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743538   47456 command_runner.go:130] > # gid_mappings = ""
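The uid_mappings and gid_mappings values described above use the form containerID:hostID:size, with multiple ranges separated by commas. A minimal Go sketch (not part of the test run; type and function names are illustrative) of parsing that format:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// idMapping mirrors one containerID:hostID:size range from the options above.
type idMapping struct {
	ContainerID, HostID, Size uint32
}

// parseIDMappings splits a comma-separated list such as
// "0:100000:65536,65536:165536:1000" into idMapping values.
func parseIDMappings(s string) ([]idMapping, error) {
	var out []idMapping
	for _, r := range strings.Split(s, ",") {
		parts := strings.Split(strings.TrimSpace(r), ":")
		if len(parts) != 3 {
			return nil, fmt.Errorf("malformed range %q", r)
		}
		var vals [3]uint32
		for i, p := range parts {
			v, err := strconv.ParseUint(p, 10, 32)
			if err != nil {
				return nil, err
			}
			vals[i] = uint32(v)
		}
		out = append(out, idMapping{vals[0], vals[1], vals[2]})
	}
	return out, nil
}

func main() {
	m, err := parseIDMappings("0:100000:65536")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", m)
}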
	I1011 21:50:04.743547   47456 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1011 21:50:04.743558   47456 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1011 21:50:04.743564   47456 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1011 21:50:04.743572   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743576   47456 command_runner.go:130] > # minimum_mappable_uid = -1
	I1011 21:50:04.743582   47456 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1011 21:50:04.743590   47456 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1011 21:50:04.743596   47456 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1011 21:50:04.743610   47456 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1011 21:50:04.743619   47456 command_runner.go:130] > # minimum_mappable_gid = -1
	I1011 21:50:04.743628   47456 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1011 21:50:04.743641   47456 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1011 21:50:04.743650   47456 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1011 21:50:04.743663   47456 command_runner.go:130] > # ctr_stop_timeout = 30
	I1011 21:50:04.743676   47456 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1011 21:50:04.743687   47456 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1011 21:50:04.743697   47456 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1011 21:50:04.743707   47456 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1011 21:50:04.743716   47456 command_runner.go:130] > drop_infra_ctr = false
	I1011 21:50:04.743726   47456 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1011 21:50:04.743737   47456 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1011 21:50:04.743749   47456 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1011 21:50:04.743758   47456 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1011 21:50:04.743768   47456 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1011 21:50:04.743776   47456 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1011 21:50:04.743784   47456 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1011 21:50:04.743795   47456 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1011 21:50:04.743804   47456 command_runner.go:130] > # shared_cpuset = ""
	I1011 21:50:04.743816   47456 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1011 21:50:04.743824   47456 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1011 21:50:04.743835   47456 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1011 21:50:04.743846   47456 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1011 21:50:04.743856   47456 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1011 21:50:04.743864   47456 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1011 21:50:04.743877   47456 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1011 21:50:04.743886   47456 command_runner.go:130] > # enable_criu_support = false
	I1011 21:50:04.743894   47456 command_runner.go:130] > # Enable/disable the generation of the container,
	I1011 21:50:04.743906   47456 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1011 21:50:04.743916   47456 command_runner.go:130] > # enable_pod_events = false
	I1011 21:50:04.743927   47456 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1011 21:50:04.743951   47456 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1011 21:50:04.743960   47456 command_runner.go:130] > # default_runtime = "runc"
	I1011 21:50:04.743969   47456 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1011 21:50:04.743985   47456 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1011 21:50:04.743999   47456 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1011 21:50:04.744009   47456 command_runner.go:130] > # creation as a file is not desired either.
	I1011 21:50:04.744024   47456 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1011 21:50:04.744039   47456 command_runner.go:130] > # the hostname is being managed dynamically.
	I1011 21:50:04.744049   47456 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1011 21:50:04.744056   47456 command_runner.go:130] > # ]
	I1011 21:50:04.744069   47456 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1011 21:50:04.744081   47456 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1011 21:50:04.744088   47456 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1011 21:50:04.744099   47456 command_runner.go:130] > # Each entry in the table should follow the format:
	I1011 21:50:04.744107   47456 command_runner.go:130] > #
	I1011 21:50:04.744114   47456 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1011 21:50:04.744125   47456 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1011 21:50:04.744151   47456 command_runner.go:130] > # runtime_type = "oci"
	I1011 21:50:04.744164   47456 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1011 21:50:04.744179   47456 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1011 21:50:04.744189   47456 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1011 21:50:04.744201   47456 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1011 21:50:04.744210   47456 command_runner.go:130] > # monitor_env = []
	I1011 21:50:04.744217   47456 command_runner.go:130] > # privileged_without_host_devices = false
	I1011 21:50:04.744223   47456 command_runner.go:130] > # allowed_annotations = []
	I1011 21:50:04.744230   47456 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1011 21:50:04.744233   47456 command_runner.go:130] > # Where:
	I1011 21:50:04.744239   47456 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1011 21:50:04.744246   47456 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1011 21:50:04.744252   47456 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1011 21:50:04.744261   47456 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1011 21:50:04.744265   47456 command_runner.go:130] > #   in $PATH.
	I1011 21:50:04.744272   47456 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1011 21:50:04.744277   47456 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1011 21:50:04.744283   47456 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1011 21:50:04.744288   47456 command_runner.go:130] > #   state.
	I1011 21:50:04.744294   47456 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1011 21:50:04.744301   47456 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1011 21:50:04.744307   47456 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1011 21:50:04.744315   47456 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1011 21:50:04.744320   47456 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1011 21:50:04.744329   47456 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1011 21:50:04.744337   47456 command_runner.go:130] > #   The currently recognized values are:
	I1011 21:50:04.744342   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1011 21:50:04.744351   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1011 21:50:04.744361   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1011 21:50:04.744369   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1011 21:50:04.744376   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1011 21:50:04.744399   47456 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1011 21:50:04.744411   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1011 21:50:04.744419   47456 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1011 21:50:04.744425   47456 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1011 21:50:04.744433   47456 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1011 21:50:04.744437   47456 command_runner.go:130] > #   deprecated option "conmon".
	I1011 21:50:04.744444   47456 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1011 21:50:04.744452   47456 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1011 21:50:04.744459   47456 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1011 21:50:04.744473   47456 command_runner.go:130] > #   should be moved to the container's cgroup
	I1011 21:50:04.744483   47456 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1011 21:50:04.744491   47456 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1011 21:50:04.744497   47456 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1011 21:50:04.744504   47456 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1011 21:50:04.744507   47456 command_runner.go:130] > #
	I1011 21:50:04.744512   47456 command_runner.go:130] > # Using the seccomp notifier feature:
	I1011 21:50:04.744517   47456 command_runner.go:130] > #
	I1011 21:50:04.744523   47456 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1011 21:50:04.744531   47456 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1011 21:50:04.744536   47456 command_runner.go:130] > #
	I1011 21:50:04.744542   47456 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1011 21:50:04.744551   47456 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1011 21:50:04.744556   47456 command_runner.go:130] > #
	I1011 21:50:04.744564   47456 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1011 21:50:04.744569   47456 command_runner.go:130] > # feature.
	I1011 21:50:04.744573   47456 command_runner.go:130] > #
	I1011 21:50:04.744580   47456 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1011 21:50:04.744590   47456 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1011 21:50:04.744596   47456 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1011 21:50:04.744604   47456 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1011 21:50:04.744611   47456 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1011 21:50:04.744615   47456 command_runner.go:130] > #
	I1011 21:50:04.744621   47456 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1011 21:50:04.744631   47456 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1011 21:50:04.744637   47456 command_runner.go:130] > #
	I1011 21:50:04.744643   47456 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1011 21:50:04.744650   47456 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1011 21:50:04.744653   47456 command_runner.go:130] > #
	I1011 21:50:04.744658   47456 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1011 21:50:04.744666   47456 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1011 21:50:04.744672   47456 command_runner.go:130] > # limitation.
	I1011 21:50:04.744677   47456 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1011 21:50:04.744681   47456 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1011 21:50:04.744688   47456 command_runner.go:130] > runtime_type = "oci"
	I1011 21:50:04.744692   47456 command_runner.go:130] > runtime_root = "/run/runc"
	I1011 21:50:04.744698   47456 command_runner.go:130] > runtime_config_path = ""
	I1011 21:50:04.744703   47456 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1011 21:50:04.744709   47456 command_runner.go:130] > monitor_cgroup = "pod"
	I1011 21:50:04.744713   47456 command_runner.go:130] > monitor_exec_cgroup = ""
	I1011 21:50:04.744719   47456 command_runner.go:130] > monitor_env = [
	I1011 21:50:04.744725   47456 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1011 21:50:04.744730   47456 command_runner.go:130] > ]
	I1011 21:50:04.744734   47456 command_runner.go:130] > privileged_without_host_devices = false
	I1011 21:50:04.744743   47456 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1011 21:50:04.744750   47456 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1011 21:50:04.744756   47456 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1011 21:50:04.744765   47456 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1011 21:50:04.744772   47456 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1011 21:50:04.744781   47456 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1011 21:50:04.744791   47456 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1011 21:50:04.744802   47456 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1011 21:50:04.744809   47456 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1011 21:50:04.744817   47456 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1011 21:50:04.744823   47456 command_runner.go:130] > # Example:
	I1011 21:50:04.744828   47456 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1011 21:50:04.744832   47456 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1011 21:50:04.744836   47456 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1011 21:50:04.744841   47456 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1011 21:50:04.744845   47456 command_runner.go:130] > # cpuset = 0
	I1011 21:50:04.744848   47456 command_runner.go:130] > # cpushares = "0-1"
	I1011 21:50:04.744852   47456 command_runner.go:130] > # Where:
	I1011 21:50:04.744859   47456 command_runner.go:130] > # The workload name is workload-type.
	I1011 21:50:04.744865   47456 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1011 21:50:04.744870   47456 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1011 21:50:04.744875   47456 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1011 21:50:04.744882   47456 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1011 21:50:04.744893   47456 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1011 21:50:04.744898   47456 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1011 21:50:04.744904   47456 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1011 21:50:04.744908   47456 command_runner.go:130] > # Default value is set to true
	I1011 21:50:04.744912   47456 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1011 21:50:04.744918   47456 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1011 21:50:04.744922   47456 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1011 21:50:04.744926   47456 command_runner.go:130] > # Default value is set to 'false'
	I1011 21:50:04.744930   47456 command_runner.go:130] > # disable_hostport_mapping = false
	I1011 21:50:04.744936   47456 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1011 21:50:04.744941   47456 command_runner.go:130] > #
	I1011 21:50:04.744947   47456 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1011 21:50:04.744954   47456 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1011 21:50:04.744960   47456 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1011 21:50:04.744968   47456 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1011 21:50:04.744973   47456 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1011 21:50:04.744979   47456 command_runner.go:130] > [crio.image]
	I1011 21:50:04.744986   47456 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1011 21:50:04.744992   47456 command_runner.go:130] > # default_transport = "docker://"
	I1011 21:50:04.744998   47456 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1011 21:50:04.745004   47456 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1011 21:50:04.745009   47456 command_runner.go:130] > # global_auth_file = ""
	I1011 21:50:04.745015   47456 command_runner.go:130] > # The image used to instantiate infra containers.
	I1011 21:50:04.745020   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.745029   47456 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1011 21:50:04.745035   47456 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1011 21:50:04.745042   47456 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1011 21:50:04.745047   47456 command_runner.go:130] > # This option supports live configuration reload.
	I1011 21:50:04.745054   47456 command_runner.go:130] > # pause_image_auth_file = ""
	I1011 21:50:04.745059   47456 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1011 21:50:04.745067   47456 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1011 21:50:04.745075   47456 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1011 21:50:04.745083   47456 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1011 21:50:04.745087   47456 command_runner.go:130] > # pause_command = "/pause"
	I1011 21:50:04.745095   47456 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1011 21:50:04.745101   47456 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1011 21:50:04.745108   47456 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1011 21:50:04.745116   47456 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1011 21:50:04.745121   47456 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1011 21:50:04.745129   47456 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1011 21:50:04.745134   47456 command_runner.go:130] > # pinned_images = [
	I1011 21:50:04.745137   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745145   47456 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1011 21:50:04.745151   47456 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1011 21:50:04.745157   47456 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1011 21:50:04.745164   47456 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1011 21:50:04.745169   47456 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1011 21:50:04.745175   47456 command_runner.go:130] > # signature_policy = ""
	I1011 21:50:04.745181   47456 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1011 21:50:04.745190   47456 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1011 21:50:04.745200   47456 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1011 21:50:04.745209   47456 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1011 21:50:04.745216   47456 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1011 21:50:04.745220   47456 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1011 21:50:04.745228   47456 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1011 21:50:04.745234   47456 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1011 21:50:04.745238   47456 command_runner.go:130] > # changing them here.
	I1011 21:50:04.745244   47456 command_runner.go:130] > # insecure_registries = [
	I1011 21:50:04.745247   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745253   47456 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1011 21:50:04.745263   47456 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1011 21:50:04.745267   47456 command_runner.go:130] > # image_volumes = "mkdir"
	I1011 21:50:04.745272   47456 command_runner.go:130] > # Temporary directory to use for storing big files
	I1011 21:50:04.745278   47456 command_runner.go:130] > # big_files_temporary_dir = ""
	I1011 21:50:04.745284   47456 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1011 21:50:04.745290   47456 command_runner.go:130] > # CNI plugins.
	I1011 21:50:04.745293   47456 command_runner.go:130] > [crio.network]
	I1011 21:50:04.745301   47456 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1011 21:50:04.745309   47456 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1011 21:50:04.745316   47456 command_runner.go:130] > # cni_default_network = ""
	I1011 21:50:04.745321   47456 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1011 21:50:04.745327   47456 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1011 21:50:04.745332   47456 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1011 21:50:04.745338   47456 command_runner.go:130] > # plugin_dirs = [
	I1011 21:50:04.745342   47456 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1011 21:50:04.745347   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745353   47456 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1011 21:50:04.745359   47456 command_runner.go:130] > [crio.metrics]
	I1011 21:50:04.745364   47456 command_runner.go:130] > # Globally enable or disable metrics support.
	I1011 21:50:04.745370   47456 command_runner.go:130] > enable_metrics = true
	I1011 21:50:04.745374   47456 command_runner.go:130] > # Specify enabled metrics collectors.
	I1011 21:50:04.745381   47456 command_runner.go:130] > # Per default all metrics are enabled.
	I1011 21:50:04.745387   47456 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1011 21:50:04.745396   47456 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1011 21:50:04.745403   47456 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1011 21:50:04.745406   47456 command_runner.go:130] > # metrics_collectors = [
	I1011 21:50:04.745410   47456 command_runner.go:130] > # 	"operations",
	I1011 21:50:04.745417   47456 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1011 21:50:04.745421   47456 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1011 21:50:04.745428   47456 command_runner.go:130] > # 	"operations_errors",
	I1011 21:50:04.745432   47456 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1011 21:50:04.745438   47456 command_runner.go:130] > # 	"image_pulls_by_name",
	I1011 21:50:04.745442   47456 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1011 21:50:04.745448   47456 command_runner.go:130] > # 	"image_pulls_failures",
	I1011 21:50:04.745455   47456 command_runner.go:130] > # 	"image_pulls_successes",
	I1011 21:50:04.745459   47456 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1011 21:50:04.745465   47456 command_runner.go:130] > # 	"image_layer_reuse",
	I1011 21:50:04.745470   47456 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1011 21:50:04.745476   47456 command_runner.go:130] > # 	"containers_oom_total",
	I1011 21:50:04.745480   47456 command_runner.go:130] > # 	"containers_oom",
	I1011 21:50:04.745493   47456 command_runner.go:130] > # 	"processes_defunct",
	I1011 21:50:04.745501   47456 command_runner.go:130] > # 	"operations_total",
	I1011 21:50:04.745505   47456 command_runner.go:130] > # 	"operations_latency_seconds",
	I1011 21:50:04.745512   47456 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1011 21:50:04.745516   47456 command_runner.go:130] > # 	"operations_errors_total",
	I1011 21:50:04.745523   47456 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1011 21:50:04.745527   47456 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1011 21:50:04.745533   47456 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1011 21:50:04.745537   47456 command_runner.go:130] > # 	"image_pulls_success_total",
	I1011 21:50:04.745544   47456 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1011 21:50:04.745548   47456 command_runner.go:130] > # 	"containers_oom_count_total",
	I1011 21:50:04.745555   47456 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1011 21:50:04.745559   47456 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1011 21:50:04.745567   47456 command_runner.go:130] > # ]
	I1011 21:50:04.745575   47456 command_runner.go:130] > # The port on which the metrics server will listen.
	I1011 21:50:04.745579   47456 command_runner.go:130] > # metrics_port = 9090
	I1011 21:50:04.745587   47456 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1011 21:50:04.745591   47456 command_runner.go:130] > # metrics_socket = ""
	I1011 21:50:04.745597   47456 command_runner.go:130] > # The certificate for the secure metrics server.
	I1011 21:50:04.745604   47456 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1011 21:50:04.745612   47456 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1011 21:50:04.745617   47456 command_runner.go:130] > # certificate on any modification event.
	I1011 21:50:04.745622   47456 command_runner.go:130] > # metrics_cert = ""
	I1011 21:50:04.745628   47456 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1011 21:50:04.745635   47456 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1011 21:50:04.745639   47456 command_runner.go:130] > # metrics_key = ""
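The [crio.metrics] section above enables the Prometheus endpoint (enable_metrics = true), with metrics_port defaulting to 9090. A minimal Go sketch (not part of the test run) of fetching that endpoint, assuming it is reachable on localhost:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Scrape the CRI-O metrics endpoint once and report how much data came back.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes of CRI-O metrics\n", len(body))
}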
	I1011 21:50:04.745644   47456 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1011 21:50:04.745650   47456 command_runner.go:130] > [crio.tracing]
	I1011 21:50:04.745655   47456 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1011 21:50:04.745661   47456 command_runner.go:130] > # enable_tracing = false
	I1011 21:50:04.745666   47456 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1011 21:50:04.745673   47456 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1011 21:50:04.745679   47456 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1011 21:50:04.745686   47456 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1011 21:50:04.745692   47456 command_runner.go:130] > # CRI-O NRI configuration.
	I1011 21:50:04.745699   47456 command_runner.go:130] > [crio.nri]
	I1011 21:50:04.745705   47456 command_runner.go:130] > # Globally enable or disable NRI.
	I1011 21:50:04.745713   47456 command_runner.go:130] > # enable_nri = false
	I1011 21:50:04.745723   47456 command_runner.go:130] > # NRI socket to listen on.
	I1011 21:50:04.745733   47456 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1011 21:50:04.745743   47456 command_runner.go:130] > # NRI plugin directory to use.
	I1011 21:50:04.745750   47456 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1011 21:50:04.745755   47456 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1011 21:50:04.745761   47456 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1011 21:50:04.745766   47456 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1011 21:50:04.745773   47456 command_runner.go:130] > # nri_disable_connections = false
	I1011 21:50:04.745781   47456 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1011 21:50:04.745789   47456 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1011 21:50:04.745794   47456 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1011 21:50:04.745806   47456 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1011 21:50:04.745816   47456 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1011 21:50:04.745825   47456 command_runner.go:130] > [crio.stats]
	I1011 21:50:04.745836   47456 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1011 21:50:04.745847   47456 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1011 21:50:04.745857   47456 command_runner.go:130] > # stats_collection_period = 0
	I1011 21:50:04.745884   47456 command_runner.go:130] ! time="2024-10-11 21:50:04.707810385Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1011 21:50:04.745910   47456 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1011 21:50:04.745970   47456 cni.go:84] Creating CNI manager for ""
	I1011 21:50:04.745983   47456 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1011 21:50:04.745991   47456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:50:04.746012   47456 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-805849 NodeName:multinode-805849 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:50:04.746142   47456 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-805849"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
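The kubeadm config above is assembled from the per-node options logged at kubeadm.go:181. A minimal Go sketch (not minikube's own code; struct and field names are illustrative) of rendering such a config with text/template:

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds the per-node values substituted into the template below.
type nodeConfig struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	cfg := nodeConfig{
		AdvertiseAddress: "192.168.39.81",
		NodeName:         "multinode-805849",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	// Render to stdout; minikube instead writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}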
	
	I1011 21:50:04.746205   47456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:50:04.757014   47456 command_runner.go:130] > kubeadm
	I1011 21:50:04.757035   47456 command_runner.go:130] > kubectl
	I1011 21:50:04.757039   47456 command_runner.go:130] > kubelet
	I1011 21:50:04.757176   47456 binaries.go:44] Found k8s binaries, skipping transfer
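The log above shows minikube listing /var/lib/minikube/binaries/v1.31.1 and skipping the transfer because kubeadm, kubectl and kubelet are already present. A minimal Go sketch (not minikube's own code) of that presence check, run locally:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// haveK8sBinaries reports whether all expected binaries already exist in dir,
// mirroring the "Found k8s binaries, skipping transfer" decision above.
func haveK8sBinaries(dir string) bool {
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
			return false
		}
	}
	return true
}

func main() {
	dir := "/var/lib/minikube/binaries/v1.31.1"
	fmt.Println("binaries present:", haveK8sBinaries(dir))
}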
	I1011 21:50:04.757230   47456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 21:50:04.767311   47456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1011 21:50:04.787913   47456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:50:04.805818   47456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1011 21:50:04.823967   47456 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I1011 21:50:04.828363   47456 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
	I1011 21:50:04.828451   47456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:50:04.967646   47456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:50:04.982952   47456 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849 for IP: 192.168.39.81
	I1011 21:50:04.982972   47456 certs.go:194] generating shared ca certs ...
	I1011 21:50:04.983002   47456 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:50:04.983170   47456 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 21:50:04.983208   47456 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 21:50:04.983217   47456 certs.go:256] generating profile certs ...
	I1011 21:50:04.983290   47456 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/client.key
	I1011 21:50:04.983353   47456 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key.9f23dda3
	I1011 21:50:04.983387   47456 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key
	I1011 21:50:04.983398   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 21:50:04.983411   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1011 21:50:04.983423   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 21:50:04.983435   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 21:50:04.983446   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 21:50:04.983457   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 21:50:04.983469   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 21:50:04.983482   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 21:50:04.983549   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 21:50:04.983580   47456 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 21:50:04.983591   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:50:04.983613   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 21:50:04.983635   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:50:04.983657   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 21:50:04.983696   47456 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 21:50:04.983725   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem -> /usr/share/ca-certificates/18814.pem
	I1011 21:50:04.983740   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> /usr/share/ca-certificates/188142.pem
	I1011 21:50:04.983754   47456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:04.984325   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:50:05.012149   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:50:05.039429   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:50:05.066494   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:50:05.094714   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 21:50:05.122683   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:50:05.150244   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:50:05.178423   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/multinode-805849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 21:50:05.205223   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 21:50:05.231808   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 21:50:05.258475   47456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:50:05.285274   47456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:50:05.303144   47456 ssh_runner.go:195] Run: openssl version
	I1011 21:50:05.309635   47456 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1011 21:50:05.309706   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 21:50:05.322175   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327057   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327090   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.327137   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 21:50:05.333045   47456 command_runner.go:130] > 51391683
	I1011 21:50:05.333119   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 21:50:05.343642   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 21:50:05.355516   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360270   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360303   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.360341   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 21:50:05.367234   47456 command_runner.go:130] > 3ec20f2e
	I1011 21:50:05.367303   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:50:05.378375   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:50:05.390060   47456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394761   47456 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394795   47456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.394884   47456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:50:05.400863   47456 command_runner.go:130] > b5213941
	I1011 21:50:05.400935   47456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
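Each CA certificate above is hashed with "openssl x509 -hash -noout" and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL's default lookup finds it. A minimal Go sketch (not minikube's own code; needs root and openssl on PATH) of the same two steps:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink, as the shell commands in the log do.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}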
	I1011 21:50:05.411520   47456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:50:05.416166   47456 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:50:05.416193   47456 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1011 21:50:05.416201   47456 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1011 21:50:05.416210   47456 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1011 21:50:05.416219   47456 command_runner.go:130] > Access: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416225   47456 command_runner.go:130] > Modify: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416232   47456 command_runner.go:130] > Change: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416241   47456 command_runner.go:130] >  Birth: 2024-10-11 21:43:16.681258034 +0000
	I1011 21:50:05.416304   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 21:50:05.422448   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.422523   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 21:50:05.428573   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.428679   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 21:50:05.434925   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.435007   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 21:50:05.441318   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.441392   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 21:50:05.448144   47456 command_runner.go:130] > Certificate will not expire
	I1011 21:50:05.448350   47456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 21:50:05.454536   47456 command_runner.go:130] > Certificate will not expire
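The "-checkend 86400" runs above verify that each certificate will still be valid 24 hours from now. A minimal Go sketch (not minikube's own code) of the same check using crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition that "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}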
	I1011 21:50:05.454604   47456 kubeadm.go:392] StartCluster: {Name:multinode-805849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-805849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.34 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:50:05.454748   47456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 21:50:05.454807   47456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:50:05.499946   47456 command_runner.go:130] > 95f70c75ea04f7589296a1c74af42977248076a1620b4ffce857606f4db48bd1
	I1011 21:50:05.499978   47456 command_runner.go:130] > a3ad8aa85c33e81e0a18337f1f571cd4b9f5fac4c3cd1649e464f81dbce15f22
	I1011 21:50:05.499986   47456 command_runner.go:130] > d9c5d9bef725aecac0711fb53c13ab9e41bd59afed1d16e72b20921a5fe48a35
	I1011 21:50:05.499995   47456 command_runner.go:130] > c8afbfb4ddae8d530502fba1ab7981ad2ff910a55a88375f34dba4a8f128bd75
	I1011 21:50:05.500002   47456 command_runner.go:130] > 07aaa90bbf4d1334f0e1cf2b47af81e11bb502f70b04f1d0f7cb3cbb9b8ad1e2
	I1011 21:50:05.500009   47456 command_runner.go:130] > cf4c036abc4a58d75b93613c36bf0387ec672f5134c1eb86fbbf37d0cf82de04
	I1011 21:50:05.500016   47456 command_runner.go:130] > bcf0281c7e55c9969ad85223b8ef6f7ea01338f3df18d2724c78ff1a23df04a2
	I1011 21:50:05.500025   47456 command_runner.go:130] > 8d0b183bb85d1b21849642f36896ea90b243d3938f86fa02c9c561696703abb5
	I1011 21:50:05.500047   47456 cri.go:89] found id: "95f70c75ea04f7589296a1c74af42977248076a1620b4ffce857606f4db48bd1"
	I1011 21:50:05.500058   47456 cri.go:89] found id: "a3ad8aa85c33e81e0a18337f1f571cd4b9f5fac4c3cd1649e464f81dbce15f22"
	I1011 21:50:05.500062   47456 cri.go:89] found id: "d9c5d9bef725aecac0711fb53c13ab9e41bd59afed1d16e72b20921a5fe48a35"
	I1011 21:50:05.500075   47456 cri.go:89] found id: "c8afbfb4ddae8d530502fba1ab7981ad2ff910a55a88375f34dba4a8f128bd75"
	I1011 21:50:05.500080   47456 cri.go:89] found id: "07aaa90bbf4d1334f0e1cf2b47af81e11bb502f70b04f1d0f7cb3cbb9b8ad1e2"
	I1011 21:50:05.500092   47456 cri.go:89] found id: "cf4c036abc4a58d75b93613c36bf0387ec672f5134c1eb86fbbf37d0cf82de04"
	I1011 21:50:05.500099   47456 cri.go:89] found id: "bcf0281c7e55c9969ad85223b8ef6f7ea01338f3df18d2724c78ff1a23df04a2"
	I1011 21:50:05.500106   47456 cri.go:89] found id: "8d0b183bb85d1b21849642f36896ea90b243d3938f86fa02c9c561696703abb5"
	I1011 21:50:05.500111   47456 cri.go:89] found id: ""
	I1011 21:50:05.500165   47456 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-805849 -n multinode-805849
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-805849 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.06s)

                                                
                                    
x
+
TestPreload (176.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-730152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-730152 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.344362634s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-730152 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-730152 image pull gcr.io/k8s-minikube/busybox: (3.749968044s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-730152
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-730152: (7.285128835s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-730152 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1011 22:00:24.492139   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-730152 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.871993618s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-730152 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-11 22:01:08.388327408 +0000 UTC m=+3788.402684919
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-730152 -n test-preload-730152
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-730152 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-730152 logs -n 25: (1.095070702s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849 sudo cat                                       | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt                       | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m02:/home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n                                                                 | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | multinode-805849-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-805849 ssh -n multinode-805849-m02 sudo cat                                   | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-805849 node stop m03                                                          | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:45 UTC |
	| node    | multinode-805849 node start                                                             | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:45 UTC | 11 Oct 24 21:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| stop    | -p multinode-805849                                                                     | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:46 UTC |                     |
	| start   | -p multinode-805849                                                                     | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC |                     |
	| node    | multinode-805849 node delete                                                            | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC | 11 Oct 24 21:51 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-805849 stop                                                                   | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:51 UTC |                     |
	| start   | -p multinode-805849                                                                     | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:54 UTC | 11 Oct 24 21:57 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-805849                                                                | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:57 UTC |                     |
	| start   | -p multinode-805849-m02                                                                 | multinode-805849-m02 | jenkins | v1.34.0 | 11 Oct 24 21:57 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-805849-m03                                                                 | multinode-805849-m03 | jenkins | v1.34.0 | 11 Oct 24 21:57 UTC | 11 Oct 24 21:58 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-805849                                                                 | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:58 UTC |                     |
	| delete  | -p multinode-805849-m03                                                                 | multinode-805849-m03 | jenkins | v1.34.0 | 11 Oct 24 21:58 UTC | 11 Oct 24 21:58 UTC |
	| delete  | -p multinode-805849                                                                     | multinode-805849     | jenkins | v1.34.0 | 11 Oct 24 21:58 UTC | 11 Oct 24 21:58 UTC |
	| start   | -p test-preload-730152                                                                  | test-preload-730152  | jenkins | v1.34.0 | 11 Oct 24 21:58 UTC | 11 Oct 24 21:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-730152 image pull                                                          | test-preload-730152  | jenkins | v1.34.0 | 11 Oct 24 21:59 UTC | 11 Oct 24 21:59 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-730152                                                                  | test-preload-730152  | jenkins | v1.34.0 | 11 Oct 24 21:59 UTC | 11 Oct 24 21:59 UTC |
	| start   | -p test-preload-730152                                                                  | test-preload-730152  | jenkins | v1.34.0 | 11 Oct 24 21:59 UTC | 11 Oct 24 22:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-730152 image list                                                          | test-preload-730152  | jenkins | v1.34.0 | 11 Oct 24 22:01 UTC | 11 Oct 24 22:01 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:59:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:59:56.343682   52244 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:59:56.343777   52244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:59:56.343785   52244 out.go:358] Setting ErrFile to fd 2...
	I1011 21:59:56.343789   52244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:59:56.343956   52244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:59:56.344438   52244 out.go:352] Setting JSON to false
	I1011 21:59:56.345253   52244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6141,"bootTime":1728677855,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:59:56.345339   52244 start.go:139] virtualization: kvm guest
	I1011 21:59:56.347224   52244 out.go:177] * [test-preload-730152] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:59:56.348456   52244 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:59:56.348482   52244 notify.go:220] Checking for updates...
	I1011 21:59:56.350583   52244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:59:56.351888   52244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:59:56.352995   52244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:59:56.354006   52244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:59:56.355059   52244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:59:56.356414   52244 config.go:182] Loaded profile config "test-preload-730152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1011 21:59:56.356825   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:59:56.356889   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:59:56.371694   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I1011 21:59:56.372057   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:59:56.372535   52244 main.go:141] libmachine: Using API Version  1
	I1011 21:59:56.372554   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:59:56.372889   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:59:56.373057   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 21:59:56.374580   52244 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 21:59:56.375605   52244 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:59:56.375908   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:59:56.375944   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:59:56.389729   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1011 21:59:56.390086   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:59:56.390501   52244 main.go:141] libmachine: Using API Version  1
	I1011 21:59:56.390523   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:59:56.390827   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:59:56.391015   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 21:59:56.422911   52244 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:59:56.423975   52244 start.go:297] selected driver: kvm2
	I1011 21:59:56.423988   52244 start.go:901] validating driver "kvm2" against &{Name:test-preload-730152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-730152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:59:56.424084   52244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:59:56.425002   52244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:59:56.425090   52244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 21:59:56.438722   52244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 21:59:56.439042   52244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:59:56.439073   52244 cni.go:84] Creating CNI manager for ""
	I1011 21:59:56.439120   52244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 21:59:56.439165   52244 start.go:340] cluster config:
	{Name:test-preload-730152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-730152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:59:56.439252   52244 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:59:56.440924   52244 out.go:177] * Starting "test-preload-730152" primary control-plane node in "test-preload-730152" cluster
	I1011 21:59:56.442289   52244 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1011 21:59:57.079764   52244 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1011 21:59:57.079792   52244 cache.go:56] Caching tarball of preloaded images
	I1011 21:59:57.079927   52244 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1011 21:59:57.081781   52244 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1011 21:59:57.082986   52244 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1011 21:59:57.281122   52244 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1011 22:00:11.046446   52244 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1011 22:00:11.046555   52244 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1011 22:00:11.884264   52244 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1011 22:00:11.884394   52244 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/config.json ...
	I1011 22:00:11.884646   52244 start.go:360] acquireMachinesLock for test-preload-730152: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:00:11.884728   52244 start.go:364] duration metric: took 55.63µs to acquireMachinesLock for "test-preload-730152"
	I1011 22:00:11.884750   52244 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:00:11.884758   52244 fix.go:54] fixHost starting: 
	I1011 22:00:11.885022   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:11.885066   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:11.899454   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1011 22:00:11.899914   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:11.900474   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:11.900503   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:11.900797   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:11.900990   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:11.901110   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetState
	I1011 22:00:11.902673   52244 fix.go:112] recreateIfNeeded on test-preload-730152: state=Stopped err=<nil>
	I1011 22:00:11.902695   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	W1011 22:00:11.902826   52244 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:00:11.904757   52244 out.go:177] * Restarting existing kvm2 VM for "test-preload-730152" ...
	I1011 22:00:11.906248   52244 main.go:141] libmachine: (test-preload-730152) Calling .Start
	I1011 22:00:11.906423   52244 main.go:141] libmachine: (test-preload-730152) Ensuring networks are active...
	I1011 22:00:11.907118   52244 main.go:141] libmachine: (test-preload-730152) Ensuring network default is active
	I1011 22:00:11.907386   52244 main.go:141] libmachine: (test-preload-730152) Ensuring network mk-test-preload-730152 is active
	I1011 22:00:11.907690   52244 main.go:141] libmachine: (test-preload-730152) Getting domain xml...
	I1011 22:00:11.908324   52244 main.go:141] libmachine: (test-preload-730152) Creating domain...
	I1011 22:00:13.113152   52244 main.go:141] libmachine: (test-preload-730152) Waiting to get IP...
	I1011 22:00:13.114205   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:13.114638   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:13.114694   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:13.114591   52329 retry.go:31] will retry after 255.143763ms: waiting for machine to come up
	I1011 22:00:13.371243   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:13.371782   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:13.371806   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:13.371742   52329 retry.go:31] will retry after 351.199494ms: waiting for machine to come up
	I1011 22:00:13.724260   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:13.724731   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:13.724754   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:13.724691   52329 retry.go:31] will retry after 465.927619ms: waiting for machine to come up
	I1011 22:00:14.192244   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:14.192815   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:14.192839   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:14.192764   52329 retry.go:31] will retry after 458.243297ms: waiting for machine to come up
	I1011 22:00:14.652296   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:14.652835   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:14.652862   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:14.652780   52329 retry.go:31] will retry after 581.950701ms: waiting for machine to come up
	I1011 22:00:15.236642   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:15.236994   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:15.237019   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:15.236957   52329 retry.go:31] will retry after 850.921504ms: waiting for machine to come up
	I1011 22:00:16.089097   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:16.089511   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:16.089541   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:16.089473   52329 retry.go:31] will retry after 956.061462ms: waiting for machine to come up
	I1011 22:00:17.047319   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:17.047764   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:17.047817   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:17.047697   52329 retry.go:31] will retry after 1.205860921s: waiting for machine to come up
	I1011 22:00:18.254972   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:18.255421   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:18.255455   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:18.255358   52329 retry.go:31] will retry after 1.553772043s: waiting for machine to come up
	I1011 22:00:19.811035   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:19.811457   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:19.811481   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:19.811414   52329 retry.go:31] will retry after 2.243751484s: waiting for machine to come up
	I1011 22:00:22.056573   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:22.056975   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:22.056993   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:22.056947   52329 retry.go:31] will retry after 2.038176596s: waiting for machine to come up
	I1011 22:00:24.098123   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:24.098497   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:24.098514   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:24.098464   52329 retry.go:31] will retry after 3.02578394s: waiting for machine to come up
	I1011 22:00:27.125227   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:27.125690   52244 main.go:141] libmachine: (test-preload-730152) DBG | unable to find current IP address of domain test-preload-730152 in network mk-test-preload-730152
	I1011 22:00:27.125729   52244 main.go:141] libmachine: (test-preload-730152) DBG | I1011 22:00:27.125639   52329 retry.go:31] will retry after 4.218637266s: waiting for machine to come up
	I1011 22:00:31.347766   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.348270   52244 main.go:141] libmachine: (test-preload-730152) Found IP for machine: 192.168.39.186
	I1011 22:00:31.348295   52244 main.go:141] libmachine: (test-preload-730152) Reserving static IP address...
	I1011 22:00:31.348313   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has current primary IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.348644   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "test-preload-730152", mac: "52:54:00:5d:ce:db", ip: "192.168.39.186"} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.348676   52244 main.go:141] libmachine: (test-preload-730152) DBG | skip adding static IP to network mk-test-preload-730152 - found existing host DHCP lease matching {name: "test-preload-730152", mac: "52:54:00:5d:ce:db", ip: "192.168.39.186"}
	I1011 22:00:31.348690   52244 main.go:141] libmachine: (test-preload-730152) Reserved static IP address: 192.168.39.186
	I1011 22:00:31.348705   52244 main.go:141] libmachine: (test-preload-730152) Waiting for SSH to be available...
	I1011 22:00:31.348720   52244 main.go:141] libmachine: (test-preload-730152) DBG | Getting to WaitForSSH function...
	I1011 22:00:31.351132   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.351460   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.351488   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.351615   52244 main.go:141] libmachine: (test-preload-730152) DBG | Using SSH client type: external
	I1011 22:00:31.351642   52244 main.go:141] libmachine: (test-preload-730152) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa (-rw-------)
	I1011 22:00:31.351675   52244 main.go:141] libmachine: (test-preload-730152) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:00:31.351689   52244 main.go:141] libmachine: (test-preload-730152) DBG | About to run SSH command:
	I1011 22:00:31.351702   52244 main.go:141] libmachine: (test-preload-730152) DBG | exit 0
	I1011 22:00:31.478574   52244 main.go:141] libmachine: (test-preload-730152) DBG | SSH cmd err, output: <nil>: 
	I1011 22:00:31.478957   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetConfigRaw
	I1011 22:00:31.479533   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetIP
	I1011 22:00:31.482062   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.482431   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.482453   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.482726   52244 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/config.json ...
	I1011 22:00:31.482900   52244 machine.go:93] provisionDockerMachine start ...
	I1011 22:00:31.482918   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:31.483092   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:31.485098   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.485427   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.485471   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.485586   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:31.485754   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.485879   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.485991   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:31.486107   52244 main.go:141] libmachine: Using SSH client type: native
	I1011 22:00:31.486290   52244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1011 22:00:31.486300   52244 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:00:31.598747   52244 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:00:31.598775   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetMachineName
	I1011 22:00:31.598989   52244 buildroot.go:166] provisioning hostname "test-preload-730152"
	I1011 22:00:31.599014   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetMachineName
	I1011 22:00:31.599191   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:31.601662   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.601945   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.601982   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.602110   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:31.602307   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.602484   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.602633   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:31.602806   52244 main.go:141] libmachine: Using SSH client type: native
	I1011 22:00:31.602972   52244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1011 22:00:31.602985   52244 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-730152 && echo "test-preload-730152" | sudo tee /etc/hostname
	I1011 22:00:31.732058   52244 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-730152
	
	I1011 22:00:31.732090   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:31.734667   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.735006   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.735038   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.735167   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:31.735335   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.735469   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:31.735568   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:31.735719   52244 main.go:141] libmachine: Using SSH client type: native
	I1011 22:00:31.735914   52244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1011 22:00:31.735937   52244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-730152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-730152/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-730152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:00:31.855519   52244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:00:31.855547   52244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:00:31.855585   52244 buildroot.go:174] setting up certificates
	I1011 22:00:31.855594   52244 provision.go:84] configureAuth start
	I1011 22:00:31.855604   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetMachineName
	I1011 22:00:31.855875   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetIP
	I1011 22:00:31.858536   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.858886   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.858915   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.859047   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:31.861059   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.861334   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:31.861360   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:31.861432   52244 provision.go:143] copyHostCerts
	I1011 22:00:31.861496   52244 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:00:31.861514   52244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:00:31.861591   52244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:00:31.861684   52244 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:00:31.861694   52244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:00:31.861720   52244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:00:31.861772   52244 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:00:31.861779   52244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:00:31.861798   52244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:00:31.861849   52244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.test-preload-730152 san=[127.0.0.1 192.168.39.186 localhost minikube test-preload-730152]
	I1011 22:00:32.059855   52244 provision.go:177] copyRemoteCerts
	I1011 22:00:32.059922   52244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:00:32.059949   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.062474   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.062750   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.062778   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.062937   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.063150   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.063351   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.063477   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:32.148380   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:00:32.172010   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:00:32.194704   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:00:32.216992   52244 provision.go:87] duration metric: took 361.386958ms to configureAuth
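The three files copied above (ca.pem, server.pem, server-key.pem) are the TLS material libmachine provisions under /etc/docker on the guest. A minimal way to sanity-check them from the guest shell (hypothetical commands, using the paths shown in the log):

    $ sudo openssl x509 -noout -subject -dates -in /etc/docker/server.pem
    $ sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem   # should report: /etc/docker/server.pem: OK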
	I1011 22:00:32.217020   52244 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:00:32.217193   52244 config.go:182] Loaded profile config "test-preload-730152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1011 22:00:32.217276   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.220019   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.220408   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.220440   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.220634   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.220850   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.220985   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.221173   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.221318   52244 main.go:141] libmachine: Using SSH client type: native
	I1011 22:00:32.221486   52244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1011 22:00:32.221500   52244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:00:32.444577   52244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:00:32.444604   52244 machine.go:96] duration metric: took 961.691675ms to provisionDockerMachine
	I1011 22:00:32.444616   52244 start.go:293] postStartSetup for "test-preload-730152" (driver="kvm2")
	I1011 22:00:32.444625   52244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:00:32.444639   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:32.444916   52244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:00:32.444949   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.447705   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.448051   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.448070   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.448232   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.448415   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.448575   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.448718   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:32.533499   52244 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:00:32.537606   52244 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:00:32.537627   52244 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:00:32.537697   52244 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:00:32.537802   52244 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:00:32.537923   52244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:00:32.546972   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:00:32.570190   52244 start.go:296] duration metric: took 125.562695ms for postStartSetup
	I1011 22:00:32.570229   52244 fix.go:56] duration metric: took 20.685471194s for fixHost
	I1011 22:00:32.570261   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.572905   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.573274   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.573315   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.573477   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.573658   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.573784   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.573939   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.574091   52244 main.go:141] libmachine: Using SSH client type: native
	I1011 22:00:32.574261   52244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1011 22:00:32.574275   52244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:00:32.687264   52244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728684032.648355285
	
	I1011 22:00:32.687302   52244 fix.go:216] guest clock: 1728684032.648355285
	I1011 22:00:32.687310   52244 fix.go:229] Guest: 2024-10-11 22:00:32.648355285 +0000 UTC Remote: 2024-10-11 22:00:32.570233854 +0000 UTC m=+36.263066727 (delta=78.121431ms)
	I1011 22:00:32.687327   52244 fix.go:200] guest clock delta is within tolerance: 78.121431ms
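The guest clock check above simply runs date +%s.%N over SSH and compares the result with the host's wall clock; here the difference is about 78 ms, well inside minikube's tolerance. A rough manual equivalent (hypothetical sketch; the SSH user and key path are taken from the log above):

    $ host_now=$(date +%s.%N)
    $ guest_now=$(ssh -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa docker@192.168.39.186 'date +%s.%N')
    $ echo "$guest_now - $host_now" | bc    # guest/host clock delta in seconds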
	I1011 22:00:32.687331   52244 start.go:83] releasing machines lock for "test-preload-730152", held for 20.802591122s
	I1011 22:00:32.687349   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:32.687578   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetIP
	I1011 22:00:32.689987   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.690305   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.690331   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.690480   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:32.690886   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:32.691018   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:32.691118   52244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:00:32.691166   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.691172   52244 ssh_runner.go:195] Run: cat /version.json
	I1011 22:00:32.691202   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:32.693671   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.693932   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.693999   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.694025   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.694153   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.694304   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.694312   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:32.694331   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:32.694473   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:32.694491   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.694644   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:32.694651   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:32.694760   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:32.694906   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:32.794373   52244 ssh_runner.go:195] Run: systemctl --version
	I1011 22:00:32.800103   52244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:00:32.941406   52244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:00:32.949270   52244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:00:32.949328   52244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:00:32.966114   52244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:00:32.966137   52244 start.go:495] detecting cgroup driver to use...
	I1011 22:00:32.966195   52244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:00:32.982130   52244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:00:32.996209   52244 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:00:32.996263   52244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:00:33.009783   52244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:00:33.022835   52244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:00:33.130885   52244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:00:33.295394   52244 docker.go:233] disabling docker service ...
	I1011 22:00:33.295458   52244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:00:33.310137   52244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:00:33.322893   52244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:00:33.440380   52244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:00:33.552926   52244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
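After the block above, cri-docker and docker should be stopped, disabled and masked so that CRI-O is the only runtime the kubelet can reach. One way to confirm on the guest (hypothetical checks, expected states derived from the commands in the log):

    $ sudo systemctl is-active docker cri-docker.service    # expect: inactive / inactive
    $ sudo systemctl is-enabled docker.service              # expect: masked
    $ sudo systemctl is-enabled cri-docker.socket           # expect: disabled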
	I1011 22:00:33.567044   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:00:33.585211   52244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1011 22:00:33.585270   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.595625   52244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:00:33.595695   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.605769   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.616067   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.626322   52244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:00:33.636939   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.647110   52244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.664045   52244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:00:33.674747   52244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:00:33.684090   52244 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:00:33.684154   52244 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:00:33.697446   52244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:00:33.706930   52244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:00:33.822035   52244 ssh_runner.go:195] Run: sudo systemctl restart crio
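The tee/sed sequence above rewrites /etc/crictl.yaml and the 02-crio.conf drop-in (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter and enables IPv4 forwarding before restarting CRI-O. A quick way to confirm the result (hypothetical checks; expected values follow from the commands in the log):

    $ cat /etc/crictl.yaml                                   # runtime-endpoint: unix:///var/run/crio/crio.sock
    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    $ cat /proc/sys/net/ipv4/ip_forward                      # expect: 1
    $ sudo sysctl net.bridge.bridge-nf-call-iptables         # available once br_netfilter is loaded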
	I1011 22:00:33.918610   52244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:00:33.918689   52244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:00:33.923821   52244 start.go:563] Will wait 60s for crictl version
	I1011 22:00:33.923874   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:33.927527   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:00:33.969099   52244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:00:33.969192   52244 ssh_runner.go:195] Run: crio --version
	I1011 22:00:33.997993   52244 ssh_runner.go:195] Run: crio --version
	I1011 22:00:34.027168   52244 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1011 22:00:34.028364   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetIP
	I1011 22:00:34.030916   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:34.031208   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:34.031229   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:34.031420   52244 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:00:34.035400   52244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
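The bash one-liner above rewrites /etc/hosts in place: it drops any stale host.minikube.internal entry and appends the current gateway address. Afterwards the guest should resolve the host as 192.168.39.1 (hypothetical check):

    $ grep 'host.minikube.internal' /etc/hosts    # expect: 192.168.39.1	host.minikube.internal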
	I1011 22:00:34.047855   52244 kubeadm.go:883] updating cluster {Name:test-preload-730152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-730152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:00:34.047951   52244 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1011 22:00:34.048030   52244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:00:34.085469   52244 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1011 22:00:34.085530   52244 ssh_runner.go:195] Run: which lz4
	I1011 22:00:34.089433   52244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:00:34.093493   52244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:00:34.093515   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1011 22:00:35.610678   52244 crio.go:462] duration metric: took 1.521267635s to copy over tarball
	I1011 22:00:35.610752   52244 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:00:37.932910   52244 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322127306s)
	I1011 22:00:37.932946   52244 crio.go:469] duration metric: took 2.322241063s to extract the tarball
	I1011 22:00:37.932956   52244 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:00:37.974027   52244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:00:38.013287   52244 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1011 22:00:38.013310   52244 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:00:38.013360   52244 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:00:38.013404   52244 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:38.013421   52244 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:38.013447   52244 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:38.013468   52244 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1011 22:00:38.013510   52244 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:38.013425   52244 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:38.013404   52244 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.014952   52244 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.014954   52244 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1011 22:00:38.014954   52244 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:00:38.014963   52244 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:38.015035   52244 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:38.015036   52244 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:38.014954   52244 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:38.014955   52244 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:38.201048   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.232652   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1011 22:00:38.243127   52244 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1011 22:00:38.243163   52244 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.243202   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.277964   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.278007   52244 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1011 22:00:38.278045   52244 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1011 22:00:38.278083   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.310730   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1011 22:00:38.310870   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.351742   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:38.357439   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1011 22:00:38.357461   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1011 22:00:38.364232   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:38.366985   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:38.367301   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:38.394346   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:38.489062   52244 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1011 22:00:38.489102   52244 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:38.489149   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.489173   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1011 22:00:38.489279   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1011 22:00:38.489284   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1011 22:00:38.526742   52244 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1011 22:00:38.526771   52244 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1011 22:00:38.526791   52244 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:38.526799   52244 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:38.526827   52244 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1011 22:00:38.526839   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.526856   52244 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:38.526839   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.526908   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.526897   52244 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1011 22:00:38.526946   52244 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:38.526966   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:38.526981   52244 ssh_runner.go:195] Run: which crictl
	I1011 22:00:38.563367   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1011 22:00:38.563422   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1011 22:00:38.563454   52244 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1011 22:00:38.563501   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1011 22:00:38.563520   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:38.563502   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1011 22:00:38.563577   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:38.563600   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:38.603775   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:38.603812   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:39.198605   52244 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:00:41.821636   52244 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.258109711s)
	I1011 22:00:41.821670   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1011 22:00:41.821765   52244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.258165129s)
	I1011 22:00:41.821822   52244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.258279383s)
	I1011 22:00:41.821835   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:41.821860   52244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.258238702s)
	I1011 22:00:41.821864   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:41.821917   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:41.821917   52244 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.258332222s)
	I1011 22:00:41.821966   52244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.218161141s)
	I1011 22:00:41.821973   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1011 22:00:41.821981   52244 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1011 22:00:41.821998   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:41.822007   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1011 22:00:41.822034   52244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.218204834s)
	I1011 22:00:41.822070   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1011 22:00:41.822100   52244 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.623447976s)
	I1011 22:00:42.067064   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 22:00:42.067077   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1011 22:00:42.067132   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1011 22:00:42.067206   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1011 22:00:42.067275   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1011 22:00:42.067330   52244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 22:00:42.067356   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1011 22:00:42.145902   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1011 22:00:42.145994   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1011 22:00:42.145999   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1011 22:00:42.146102   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1011 22:00:42.153092   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1011 22:00:42.153139   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1011 22:00:42.153151   52244 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1011 22:00:42.153189   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1011 22:00:42.153196   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1011 22:00:42.153216   52244 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1011 22:00:42.153280   52244 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1011 22:00:42.156123   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1011 22:00:42.156536   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1011 22:00:42.911160   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1011 22:00:42.911196   52244 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1011 22:00:42.911252   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1011 22:00:42.911300   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1011 22:00:42.911251   52244 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1011 22:00:43.360850   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1011 22:00:43.360901   52244 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1011 22:00:43.360972   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1011 22:00:44.108894   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1011 22:00:44.108943   52244 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1011 22:00:44.109008   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1011 22:00:44.449955   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1011 22:00:44.449993   52244 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1011 22:00:44.450039   52244 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1011 22:00:46.598451   52244 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.148382234s)
	I1011 22:00:46.598481   52244 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1011 22:00:46.598506   52244 cache_images.go:123] Successfully loaded all cached images
	I1011 22:00:46.598510   52244 cache_images.go:92] duration metric: took 8.585190322s to LoadCachedImages
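At this point every image from the k8s v1.24.4 manifest has been transferred from the host cache and loaded into CRI-O via podman load, because the extracted preload tarball did not contain them. They should now be visible to the CRI (hypothetical check):

    $ sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'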
	I1011 22:00:46.598519   52244 kubeadm.go:934] updating node { 192.168.39.186 8443 v1.24.4 crio true true} ...
	I1011 22:00:46.598631   52244 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-730152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-730152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:00:46.598711   52244 ssh_runner.go:195] Run: crio config
	I1011 22:00:46.650326   52244 cni.go:84] Creating CNI manager for ""
	I1011 22:00:46.650349   52244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:00:46.650359   52244 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:00:46.650375   52244 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-730152 NodeName:test-preload-730152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:00:46.650495   52244 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-730152"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:00:46.650564   52244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1011 22:00:46.660938   52244 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:00:46.661000   52244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:00:46.670681   52244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1011 22:00:46.687461   52244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:00:46.704068   52244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
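The three files written above are the kubelet systemd drop-in, the kubelet unit itself, and the rendered kubeadm config; minikube later diffs the new kubeadm config against the one already on disk to decide whether the control plane needs reconfiguration (see the diff further down in this log). They can be inspected directly (hypothetical commands):

    $ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new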
	I1011 22:00:46.721349   52244 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I1011 22:00:46.725103   52244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:00:46.737210   52244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:00:46.871077   52244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:00:46.888784   52244 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152 for IP: 192.168.39.186
	I1011 22:00:46.888804   52244 certs.go:194] generating shared ca certs ...
	I1011 22:00:46.888821   52244 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:00:46.888957   52244 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:00:46.888997   52244 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:00:46.889008   52244 certs.go:256] generating profile certs ...
	I1011 22:00:46.889109   52244 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/client.key
	I1011 22:00:46.889175   52244 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/apiserver.key.c5cc8c6b
	I1011 22:00:46.889222   52244 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/proxy-client.key
	I1011 22:00:46.889359   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:00:46.889399   52244 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:00:46.889409   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:00:46.889429   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:00:46.889464   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:00:46.889488   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:00:46.889531   52244 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:00:46.890242   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:00:46.935244   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:00:46.979947   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:00:47.009253   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:00:47.039980   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:00:47.066469   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:00:47.091846   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:00:47.125696   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:00:47.150943   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:00:47.173421   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:00:47.195670   52244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:00:47.218176   52244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:00:47.234608   52244 ssh_runner.go:195] Run: openssl version
	I1011 22:00:47.240204   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:00:47.250996   52244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:00:47.255890   52244 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:00:47.255938   52244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:00:47.261702   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:00:47.272894   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:00:47.283759   52244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:00:47.288086   52244 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:00:47.288148   52244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:00:47.293734   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:00:47.304807   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:00:47.315853   52244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:00:47.320402   52244 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:00:47.320473   52244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:00:47.325982   52244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
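Each ln -fs above creates an OpenSSL hash-named symlink (for example b5213941.0) so that tools using the system trust store can find the minikube CA and the test certificates by subject hash. The link name comes from the x509 -hash call shown above (hypothetical check):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints the subject hash, e.g. b5213941
    $ ls -l /etc/ssl/certs/b5213941.0                                            # symlink to /etc/ssl/certs/minikubeCA.pem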
	I1011 22:00:47.337330   52244 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:00:47.342062   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:00:47.348091   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:00:47.353876   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:00:47.359828   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:00:47.365496   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:00:47.371252   52244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
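The -checkend 86400 calls above make openssl exit non-zero if the given certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether the existing control-plane certificates can be reused. For example (hypothetical):

    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "still valid for at least 24h" || echo "expires within 24h"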
	I1011 22:00:47.377140   52244 kubeadm.go:392] StartCluster: {Name:test-preload-730152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-730152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:00:47.377220   52244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:00:47.377259   52244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:00:47.422317   52244 cri.go:89] found id: ""
	I1011 22:00:47.422404   52244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:00:47.433310   52244 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:00:47.433339   52244 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:00:47.433385   52244 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:00:47.443985   52244 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:00:47.444393   52244 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-730152" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:00:47.444563   52244 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-730152" cluster setting kubeconfig missing "test-preload-730152" context setting]
	I1011 22:00:47.444834   52244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:00:47.445459   52244 kapi.go:59] client config for test-preload-730152: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 22:00:47.446133   52244 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:00:47.456421   52244 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.186
	I1011 22:00:47.456460   52244 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:00:47.456471   52244 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:00:47.456516   52244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:00:47.496972   52244 cri.go:89] found id: ""
	I1011 22:00:47.497045   52244 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:00:47.514469   52244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:00:47.524477   52244 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:00:47.524503   52244 kubeadm.go:157] found existing configuration files:
	
	I1011 22:00:47.524549   52244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:00:47.534043   52244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:00:47.534103   52244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:00:47.544049   52244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:00:47.553403   52244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:00:47.553471   52244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:00:47.563195   52244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:00:47.572943   52244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:00:47.573004   52244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:00:47.582575   52244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:00:47.591831   52244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:00:47.591883   52244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
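The grep/rm sequence above keeps each existing kubeconfig-style file only if it already references https://control-plane.minikube.internal:8443 and deletes it otherwise, so that the kubeadm init phases that follow regenerate it. A minimal Go sketch of that cleanup logic (illustrative; the log performs it remotely with grep and rm over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Keep a file only if it already points at the expected control-plane
	// endpoint; otherwise remove it so `kubeadm init phase kubeconfig`
	// writes a fresh copy.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Printf("%s already references %s, keeping it\n", f, endpoint)
	}
}
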
	I1011 22:00:47.601332   52244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:00:47.610999   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:47.705475   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:48.894651   52244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.189133947s)
	I1011 22:00:48.894679   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:49.168256   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:49.240545   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:49.316791   52244 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:00:49.316894   52244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:00:49.818028   52244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:00:50.317085   52244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:00:50.344067   52244 api_server.go:72] duration metric: took 1.027275958s to wait for apiserver process to appear ...
	I1011 22:00:50.344092   52244 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:00:50.344114   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:50.344559   52244 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": dial tcp 192.168.39.186:8443: connect: connection refused
	I1011 22:00:50.844183   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:50.844796   52244 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": dial tcp 192.168.39.186:8443: connect: connection refused
	I1011 22:00:51.344386   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:53.916675   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:00:53.916706   52244 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:00:53.916723   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:53.961517   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:00:53.961542   52244 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:00:54.345125   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:54.349966   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:00:54.349994   52244 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:00:54.844296   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:54.849442   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:00:54.849472   52244 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:00:55.345091   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:00:55.353012   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1011 22:00:55.360797   52244 api_server.go:141] control plane version: v1.24.4
	I1011 22:00:55.360831   52244 api_server.go:131] duration metric: took 5.016730476s to wait for apiserver health ...
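The healthz loop above retries roughly every 500ms, treating connection-refused errors, 403s, and 500s as "not ready yet" and stopping at the first 200 "ok". A minimal sketch of that polling pattern, with TLS verification skipped purely for illustration (the client in the log is configured with the cluster CA and client certificates instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.186:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: the check in the log trusts the cluster CA
		// and presents client certificates rather than skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver container starts
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body)) // "ok"
			return
		}
		// 403/500 while RBAC bootstrap roles and post-start hooks finish
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
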
	I1011 22:00:55.360842   52244 cni.go:84] Creating CNI manager for ""
	I1011 22:00:55.360851   52244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:00:55.362727   52244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:00:55.364320   52244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:00:55.389674   52244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
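The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI recommended for the kvm2 + crio combination. A representative bridge conflist written from Go, as an assumption about its general shape rather than the exact bytes minikube generates:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist (host-local IPAM plus portmap);
// the file minikube actually writes may differ in details.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Requires root, since /etc/cni/net.d is owned by root on the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
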
	I1011 22:00:55.468422   52244 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:00:55.468544   52244 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 22:00:55.468571   52244 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 22:00:55.485025   52244 system_pods.go:59] 8 kube-system pods found
	I1011 22:00:55.485065   52244 system_pods.go:61] "coredns-6d4b75cb6d-pjvwn" [36f900fd-3074-4cbe-9ea6-6348dc47cac8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:00:55.485086   52244 system_pods.go:61] "coredns-6d4b75cb6d-vj4wv" [c7bdbe1c-7ccc-4527-bcac-5e4a8354710a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:00:55.485093   52244 system_pods.go:61] "etcd-test-preload-730152" [da54db0a-deca-4c1c-a0f7-8579a77620bf] Running
	I1011 22:00:55.485104   52244 system_pods.go:61] "kube-apiserver-test-preload-730152" [39932432-bbbb-4231-bf6e-5f7eb4c66772] Running
	I1011 22:00:55.485109   52244 system_pods.go:61] "kube-controller-manager-test-preload-730152" [402af7d0-4fec-4b3e-b917-57e35df63b2b] Running
	I1011 22:00:55.485117   52244 system_pods.go:61] "kube-proxy-lhlzw" [0eb01871-7c0f-448e-8f8d-4f8e7fdcb354] Running
	I1011 22:00:55.485122   52244 system_pods.go:61] "kube-scheduler-test-preload-730152" [f3945c7b-c0d6-46d3-80e2-f802f29288a8] Running
	I1011 22:00:55.485131   52244 system_pods.go:61] "storage-provisioner" [1092bdc8-3ba9-465b-9ba8-84c94bbc067b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:00:55.485148   52244 system_pods.go:74] duration metric: took 16.687956ms to wait for pod list to return data ...
	I1011 22:00:55.485163   52244 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:00:55.489228   52244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:00:55.489260   52244 node_conditions.go:123] node cpu capacity is 2
	I1011 22:00:55.489273   52244 node_conditions.go:105] duration metric: took 4.101159ms to run NodePressure ...
	I1011 22:00:55.489291   52244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:00:55.691360   52244 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:00:55.696986   52244 retry.go:31] will retry after 221.292606ms: kubelet not initialised
	I1011 22:00:55.922794   52244 retry.go:31] will retry after 546.191865ms: kubelet not initialised
	I1011 22:00:56.473693   52244 kubeadm.go:739] kubelet initialised
	I1011 22:00:56.473715   52244 kubeadm.go:740] duration metric: took 782.33404ms waiting for restarted kubelet to initialise ...
	I1011 22:00:56.473723   52244 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:00:56.479464   52244 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:56.485007   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.485044   52244 pod_ready.go:82] duration metric: took 5.553131ms for pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:56.485056   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.485066   52244 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:56.490108   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "etcd-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.490131   52244 pod_ready.go:82] duration metric: took 5.055666ms for pod "etcd-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:56.490140   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "etcd-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.490148   52244 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:56.496265   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "kube-apiserver-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.496292   52244 pod_ready.go:82] duration metric: took 6.133238ms for pod "kube-apiserver-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:56.496303   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "kube-apiserver-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.496311   52244 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:56.501439   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.501471   52244 pod_ready.go:82] duration metric: took 5.147439ms for pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:56.501483   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.501492   52244 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lhlzw" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:56.873540   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "kube-proxy-lhlzw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.873568   52244 pod_ready.go:82] duration metric: took 372.065338ms for pod "kube-proxy-lhlzw" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:56.873580   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "kube-proxy-lhlzw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:56.873587   52244 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:00:57.273119   52244 pod_ready.go:98] node "test-preload-730152" hosting pod "kube-scheduler-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:57.273140   52244 pod_ready.go:82] duration metric: took 399.547347ms for pod "kube-scheduler-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	E1011 22:00:57.273149   52244 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-730152" hosting pod "kube-scheduler-test-preload-730152" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-730152" has status "Ready":"False"
	I1011 22:00:57.273156   52244 pod_ready.go:39] duration metric: took 799.426125ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:00:57.273173   52244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:00:57.285134   52244 ops.go:34] apiserver oom_adj: -16
	I1011 22:00:57.285153   52244 kubeadm.go:597] duration metric: took 9.85180785s to restartPrimaryControlPlane
	I1011 22:00:57.285164   52244 kubeadm.go:394] duration metric: took 9.908028638s to StartCluster
	I1011 22:00:57.285180   52244 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:00:57.285264   52244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:00:57.285847   52244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:00:57.286080   52244 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:00:57.286149   52244 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:00:57.286241   52244 addons.go:69] Setting storage-provisioner=true in profile "test-preload-730152"
	I1011 22:00:57.286261   52244 addons.go:234] Setting addon storage-provisioner=true in "test-preload-730152"
	W1011 22:00:57.286270   52244 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:00:57.286285   52244 addons.go:69] Setting default-storageclass=true in profile "test-preload-730152"
	I1011 22:00:57.286301   52244 host.go:66] Checking if "test-preload-730152" exists ...
	I1011 22:00:57.286304   52244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-730152"
	I1011 22:00:57.286288   52244 config.go:182] Loaded profile config "test-preload-730152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1011 22:00:57.286760   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:57.286803   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:57.286814   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:57.286859   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:57.287742   52244 out.go:177] * Verifying Kubernetes components...
	I1011 22:00:57.288913   52244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:00:57.302475   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I1011 22:00:57.302891   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:57.303383   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:57.303405   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:57.303722   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:57.304211   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:57.304250   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:57.305206   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I1011 22:00:57.305552   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:57.306042   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:57.306070   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:57.306450   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:57.306669   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetState
	I1011 22:00:57.309196   52244 kapi.go:59] client config for test-preload-730152: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/test-preload-730152/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 22:00:57.309546   52244 addons.go:234] Setting addon default-storageclass=true in "test-preload-730152"
	W1011 22:00:57.309563   52244 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:00:57.309591   52244 host.go:66] Checking if "test-preload-730152" exists ...
	I1011 22:00:57.309866   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:57.309904   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:57.323346   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I1011 22:00:57.323537   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I1011 22:00:57.323724   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:57.324042   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:57.324165   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:57.324180   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:57.324492   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:57.324508   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:57.324523   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:57.324845   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:57.324992   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetState
	I1011 22:00:57.324999   52244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:00:57.325043   52244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:00:57.326593   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:57.328660   52244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:00:57.329929   52244 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:00:57.329948   52244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:00:57.329967   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:57.333072   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:57.333553   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:57.333588   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:57.333769   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:57.333931   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:57.334068   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:57.334211   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:57.362392   52244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1011 22:00:57.362893   52244 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:00:57.363377   52244 main.go:141] libmachine: Using API Version  1
	I1011 22:00:57.363396   52244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:00:57.363731   52244 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:00:57.363932   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetState
	I1011 22:00:57.365381   52244 main.go:141] libmachine: (test-preload-730152) Calling .DriverName
	I1011 22:00:57.365579   52244 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:00:57.365593   52244 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:00:57.365623   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHHostname
	I1011 22:00:57.368495   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:57.368971   52244 main.go:141] libmachine: (test-preload-730152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ce:db", ip: ""} in network mk-test-preload-730152: {Iface:virbr1 ExpiryTime:2024-10-11 23:00:23 +0000 UTC Type:0 Mac:52:54:00:5d:ce:db Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-730152 Clientid:01:52:54:00:5d:ce:db}
	I1011 22:00:57.369001   52244 main.go:141] libmachine: (test-preload-730152) DBG | domain test-preload-730152 has defined IP address 192.168.39.186 and MAC address 52:54:00:5d:ce:db in network mk-test-preload-730152
	I1011 22:00:57.369109   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHPort
	I1011 22:00:57.369279   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHKeyPath
	I1011 22:00:57.369469   52244 main.go:141] libmachine: (test-preload-730152) Calling .GetSSHUsername
	I1011 22:00:57.369694   52244 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/test-preload-730152/id_rsa Username:docker}
	I1011 22:00:57.471453   52244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:00:57.491049   52244 node_ready.go:35] waiting up to 6m0s for node "test-preload-730152" to be "Ready" ...
	I1011 22:00:57.557939   52244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:00:57.576619   52244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:00:58.545515   52244 main.go:141] libmachine: Making call to close driver server
	I1011 22:00:58.545539   52244 main.go:141] libmachine: Making call to close driver server
	I1011 22:00:58.545558   52244 main.go:141] libmachine: (test-preload-730152) Calling .Close
	I1011 22:00:58.545545   52244 main.go:141] libmachine: (test-preload-730152) Calling .Close
	I1011 22:00:58.545870   52244 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:00:58.545885   52244 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:00:58.545894   52244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:00:58.545898   52244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:00:58.545911   52244 main.go:141] libmachine: Making call to close driver server
	I1011 22:00:58.545921   52244 main.go:141] libmachine: (test-preload-730152) Calling .Close
	I1011 22:00:58.545912   52244 main.go:141] libmachine: Making call to close driver server
	I1011 22:00:58.545978   52244 main.go:141] libmachine: (test-preload-730152) Calling .Close
	I1011 22:00:58.546186   52244 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:00:58.546204   52244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:00:58.546205   52244 main.go:141] libmachine: (test-preload-730152) DBG | Closing plugin on server side
	I1011 22:00:58.546266   52244 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:00:58.546275   52244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:00:58.553431   52244 main.go:141] libmachine: Making call to close driver server
	I1011 22:00:58.553445   52244 main.go:141] libmachine: (test-preload-730152) Calling .Close
	I1011 22:00:58.553664   52244 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:00:58.553682   52244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:00:58.553697   52244 main.go:141] libmachine: (test-preload-730152) DBG | Closing plugin on server side
	I1011 22:00:58.555671   52244 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 22:00:58.556830   52244 addons.go:510] duration metric: took 1.270696396s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1011 22:00:59.496744   52244 node_ready.go:53] node "test-preload-730152" has status "Ready":"False"
	I1011 22:01:01.994428   52244 node_ready.go:53] node "test-preload-730152" has status "Ready":"False"
	I1011 22:01:03.995860   52244 node_ready.go:53] node "test-preload-730152" has status "Ready":"False"
	I1011 22:01:04.495115   52244 node_ready.go:49] node "test-preload-730152" has status "Ready":"True"
	I1011 22:01:04.495139   52244 node_ready.go:38] duration metric: took 7.00405813s for node "test-preload-730152" to be "Ready" ...
	I1011 22:01:04.495149   52244 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:01:04.501598   52244 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:04.506656   52244 pod_ready.go:93] pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:04.506675   52244 pod_ready.go:82] duration metric: took 5.053129ms for pod "coredns-6d4b75cb6d-pjvwn" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:04.506683   52244 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:06.514449   52244 pod_ready.go:103] pod "etcd-test-preload-730152" in "kube-system" namespace has status "Ready":"False"
	I1011 22:01:07.013435   52244 pod_ready.go:93] pod "etcd-test-preload-730152" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:07.013458   52244 pod_ready.go:82] duration metric: took 2.506769291s for pod "etcd-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.013468   52244 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.018024   52244 pod_ready.go:93] pod "kube-apiserver-test-preload-730152" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:07.018046   52244 pod_ready.go:82] duration metric: took 4.572211ms for pod "kube-apiserver-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.018055   52244 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.022276   52244 pod_ready.go:93] pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:07.022294   52244 pod_ready.go:82] duration metric: took 4.232542ms for pod "kube-controller-manager-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.022303   52244 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lhlzw" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.026418   52244 pod_ready.go:93] pod "kube-proxy-lhlzw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:07.026441   52244 pod_ready.go:82] duration metric: took 4.126799ms for pod "kube-proxy-lhlzw" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.026449   52244 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.296100   52244 pod_ready.go:93] pod "kube-scheduler-test-preload-730152" in "kube-system" namespace has status "Ready":"True"
	I1011 22:01:07.296122   52244 pod_ready.go:82] duration metric: took 269.667666ms for pod "kube-scheduler-test-preload-730152" in "kube-system" namespace to be "Ready" ...
	I1011 22:01:07.296133   52244 pod_ready.go:39] duration metric: took 2.80097516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
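Each pod_ready wait above polls the pod until its Ready condition reports True (or, as seen earlier, short-circuits while the node itself is not Ready). A minimal client-go sketch of the same readiness gate for the coredns pod, assuming a kubeconfig at the default location (illustrative; not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kube-dns",
		})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("coredns is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for coredns to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
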
	I1011 22:01:07.296147   52244 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:01:07.296199   52244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:01:07.310462   52244 api_server.go:72] duration metric: took 10.024349919s to wait for apiserver process to appear ...
	I1011 22:01:07.310493   52244 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:01:07.310521   52244 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1011 22:01:07.318600   52244 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1011 22:01:07.320180   52244 api_server.go:141] control plane version: v1.24.4
	I1011 22:01:07.320201   52244 api_server.go:131] duration metric: took 9.702115ms to wait for apiserver health ...
	I1011 22:01:07.320209   52244 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:01:07.504701   52244 system_pods.go:59] 7 kube-system pods found
	I1011 22:01:07.504733   52244 system_pods.go:61] "coredns-6d4b75cb6d-pjvwn" [36f900fd-3074-4cbe-9ea6-6348dc47cac8] Running
	I1011 22:01:07.504738   52244 system_pods.go:61] "etcd-test-preload-730152" [da54db0a-deca-4c1c-a0f7-8579a77620bf] Running
	I1011 22:01:07.504742   52244 system_pods.go:61] "kube-apiserver-test-preload-730152" [39932432-bbbb-4231-bf6e-5f7eb4c66772] Running
	I1011 22:01:07.504748   52244 system_pods.go:61] "kube-controller-manager-test-preload-730152" [402af7d0-4fec-4b3e-b917-57e35df63b2b] Running
	I1011 22:01:07.504753   52244 system_pods.go:61] "kube-proxy-lhlzw" [0eb01871-7c0f-448e-8f8d-4f8e7fdcb354] Running
	I1011 22:01:07.504758   52244 system_pods.go:61] "kube-scheduler-test-preload-730152" [f3945c7b-c0d6-46d3-80e2-f802f29288a8] Running
	I1011 22:01:07.504763   52244 system_pods.go:61] "storage-provisioner" [1092bdc8-3ba9-465b-9ba8-84c94bbc067b] Running
	I1011 22:01:07.504771   52244 system_pods.go:74] duration metric: took 184.555934ms to wait for pod list to return data ...
	I1011 22:01:07.504779   52244 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:01:07.694708   52244 default_sa.go:45] found service account: "default"
	I1011 22:01:07.694738   52244 default_sa.go:55] duration metric: took 189.951177ms for default service account to be created ...
	I1011 22:01:07.694749   52244 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:01:07.898682   52244 system_pods.go:86] 7 kube-system pods found
	I1011 22:01:07.898713   52244 system_pods.go:89] "coredns-6d4b75cb6d-pjvwn" [36f900fd-3074-4cbe-9ea6-6348dc47cac8] Running
	I1011 22:01:07.898721   52244 system_pods.go:89] "etcd-test-preload-730152" [da54db0a-deca-4c1c-a0f7-8579a77620bf] Running
	I1011 22:01:07.898728   52244 system_pods.go:89] "kube-apiserver-test-preload-730152" [39932432-bbbb-4231-bf6e-5f7eb4c66772] Running
	I1011 22:01:07.898734   52244 system_pods.go:89] "kube-controller-manager-test-preload-730152" [402af7d0-4fec-4b3e-b917-57e35df63b2b] Running
	I1011 22:01:07.898740   52244 system_pods.go:89] "kube-proxy-lhlzw" [0eb01871-7c0f-448e-8f8d-4f8e7fdcb354] Running
	I1011 22:01:07.898745   52244 system_pods.go:89] "kube-scheduler-test-preload-730152" [f3945c7b-c0d6-46d3-80e2-f802f29288a8] Running
	I1011 22:01:07.898750   52244 system_pods.go:89] "storage-provisioner" [1092bdc8-3ba9-465b-9ba8-84c94bbc067b] Running
	I1011 22:01:07.898758   52244 system_pods.go:126] duration metric: took 204.002565ms to wait for k8s-apps to be running ...
	I1011 22:01:07.898771   52244 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:01:07.898824   52244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:01:07.913640   52244 system_svc.go:56] duration metric: took 14.86222ms WaitForService to wait for kubelet
	I1011 22:01:07.913668   52244 kubeadm.go:582] duration metric: took 10.627560776s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:01:07.913683   52244 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:01:08.095479   52244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:01:08.095518   52244 node_conditions.go:123] node cpu capacity is 2
	I1011 22:01:08.095529   52244 node_conditions.go:105] duration metric: took 181.841009ms to run NodePressure ...
	I1011 22:01:08.095542   52244 start.go:241] waiting for startup goroutines ...
	I1011 22:01:08.095558   52244 start.go:246] waiting for cluster config update ...
	I1011 22:01:08.095572   52244 start.go:255] writing updated cluster config ...
	I1011 22:01:08.095880   52244 ssh_runner.go:195] Run: rm -f paused
	I1011 22:01:08.143342   52244 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1011 22:01:08.145262   52244 out.go:201] 
	W1011 22:01:08.146577   52244 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1011 22:01:08.147797   52244 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1011 22:01:08.148927   52244 out.go:177] * Done! kubectl is now configured to use "test-preload-730152" cluster and "default" namespace by default
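The closing warning compares the host kubectl (1.31.1) with the cluster version (1.24.4) and reports a minor skew of 7, well outside the one-minor-version skew kubectl officially supports, hence the suggestion to use the bundled `minikube kubectl`. A tiny sketch of that comparison on plain "major.minor.patch" strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" string,
// tolerating a leading "v" as in "v1.24.4".
func minorOf(version string) int {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	minor, _ := strconv.Atoi(parts[1])
	return minor
}

func main() {
	kubectlVersion := "1.31.1"
	clusterVersion := "1.24.4"
	skew := minorOf(kubectlVersion) - minorOf(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl %s vs cluster %s: minor skew %d\n", kubectlVersion, clusterVersion, skew)
	// Prints: kubectl 1.31.1 vs cluster 1.24.4: minor skew 7
}
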
	
	
	==> CRI-O <==
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.001006765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684069000983553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4338eb1d-2094-42c6-b906-3c569ba13484 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.001564210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01ba3ba7-ac0e-4053-b1c9-503eed111861 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.001638074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01ba3ba7-ac0e-4053-b1c9-503eed111861 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.002009109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63ab581cf7e4f5d8efbd30d34a630deda8ec3d8f5df1969f05f1be8ea21ebd0c,PodSandboxId:39ebec44b4aec93277ae28f02e7d753dae92c7ad8d4aa6905adae3c5507d561c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728684062388756692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pjvwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f900fd-3074-4cbe-9ea6-6348dc47cac8,},Annotations:map[string]string{io.kubernetes.container.hash: 57a021c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf21ad8bf5931332c0eda3155169fe87e8c45d5e21a153a00f7dd338656cc06,PodSandboxId:a45ce80a806f3f36f6bf8e3710209f7b62f6b09ae038d3bfc12aecc10afdeb3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684055307587405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1092bdc8-3ba9-465b-9ba8-84c94bbc067b,},Annotations:map[string]string{io.kubernetes.container.hash: df9da6f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf626ba890b7345f9a172ef90248313ad259ca3efa410ed6d4e0a2ce9e83bf51,PodSandboxId:04bfe84e6d7bc760612c6b6260ff2322a0e0d1c977bcf9f0182a0e8f72c395ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728684055053532641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhlzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e
b01871-7c0f-448e-8f8d-4f8e7fdcb354,},Annotations:map[string]string{io.kubernetes.container.hash: 83713d10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60931ffe873791c13850b25f886fad56d2c6aa6fe21ad0e4b7bbd11999868aca,PodSandboxId:eb542ec171522dd67198bb053a298845d312367b25d38127a34e0aa91c245c14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728684050089405618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130496e4
3b8a20ad5acef075ab32e15,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268dd4a9f7692af6cf89e1fd8392a6817884b9119472242ff20a3a90ec3d02a,PodSandboxId:5d9882985432b9090eeeb1a4f88d04662f46c186fc39f0d4d9a7c50841dfef0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728684050025288096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b985751a97b36d607c54d94e7e5691,},Annotations:map
[string]string{io.kubernetes.container.hash: c585fd79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a597b9abfef883c7f337995b41eae20a0c091f0838cd6666d272002bb0ca404b,PodSandboxId:9946da7e7d3523bd0d7ac86031497fcf457f4b71f15ade4f0a78f6ac9b594d27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728684050007338999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbde063a0ea13f9be236a22a29e6d79c,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72981e26aff5e9209b90753a8984930d2bb39a99f929ff6b2ebb88e9a1522cf,PodSandboxId:5d42eb7270cb95547bced077afee5567feaf9d48059b34bf5e1f60a63e88a21f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728684049979208055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f506014bb0736837a7b649c3a98485b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01ba3ba7-ac0e-4053-b1c9-503eed111861 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.038815972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=787f3d01-a616-447f-bca8-eb9848064abe name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.038950854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=787f3d01-a616-447f-bca8-eb9848064abe name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.040560074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a39b4a4-d5ff-4686-8148-b35015719e26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.041046531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684069041021862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a39b4a4-d5ff-4686-8148-b35015719e26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.041629900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98b6022-d6a6-4db8-a191-c1d86f8cfb0c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.041709167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98b6022-d6a6-4db8-a191-c1d86f8cfb0c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.041917599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63ab581cf7e4f5d8efbd30d34a630deda8ec3d8f5df1969f05f1be8ea21ebd0c,PodSandboxId:39ebec44b4aec93277ae28f02e7d753dae92c7ad8d4aa6905adae3c5507d561c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728684062388756692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pjvwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f900fd-3074-4cbe-9ea6-6348dc47cac8,},Annotations:map[string]string{io.kubernetes.container.hash: 57a021c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf21ad8bf5931332c0eda3155169fe87e8c45d5e21a153a00f7dd338656cc06,PodSandboxId:a45ce80a806f3f36f6bf8e3710209f7b62f6b09ae038d3bfc12aecc10afdeb3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684055307587405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1092bdc8-3ba9-465b-9ba8-84c94bbc067b,},Annotations:map[string]string{io.kubernetes.container.hash: df9da6f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf626ba890b7345f9a172ef90248313ad259ca3efa410ed6d4e0a2ce9e83bf51,PodSandboxId:04bfe84e6d7bc760612c6b6260ff2322a0e0d1c977bcf9f0182a0e8f72c395ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728684055053532641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhlzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e
b01871-7c0f-448e-8f8d-4f8e7fdcb354,},Annotations:map[string]string{io.kubernetes.container.hash: 83713d10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60931ffe873791c13850b25f886fad56d2c6aa6fe21ad0e4b7bbd11999868aca,PodSandboxId:eb542ec171522dd67198bb053a298845d312367b25d38127a34e0aa91c245c14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728684050089405618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130496e4
3b8a20ad5acef075ab32e15,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268dd4a9f7692af6cf89e1fd8392a6817884b9119472242ff20a3a90ec3d02a,PodSandboxId:5d9882985432b9090eeeb1a4f88d04662f46c186fc39f0d4d9a7c50841dfef0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728684050025288096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b985751a97b36d607c54d94e7e5691,},Annotations:map
[string]string{io.kubernetes.container.hash: c585fd79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a597b9abfef883c7f337995b41eae20a0c091f0838cd6666d272002bb0ca404b,PodSandboxId:9946da7e7d3523bd0d7ac86031497fcf457f4b71f15ade4f0a78f6ac9b594d27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728684050007338999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbde063a0ea13f9be236a22a29e6d79c,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72981e26aff5e9209b90753a8984930d2bb39a99f929ff6b2ebb88e9a1522cf,PodSandboxId:5d42eb7270cb95547bced077afee5567feaf9d48059b34bf5e1f60a63e88a21f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728684049979208055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f506014bb0736837a7b649c3a98485b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d98b6022-d6a6-4db8-a191-c1d86f8cfb0c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.083554524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1ec7861-0aee-4b63-8470-d5ec1fab7a21 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.083624736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1ec7861-0aee-4b63-8470-d5ec1fab7a21 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.084461088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebf6c363-525d-4add-83fd-01051bdf672b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.085157553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684069085130414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebf6c363-525d-4add-83fd-01051bdf672b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.085889924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24068989-36e2-4df7-be2e-cd1d9a851e0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.085964108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24068989-36e2-4df7-be2e-cd1d9a851e0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.086117391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63ab581cf7e4f5d8efbd30d34a630deda8ec3d8f5df1969f05f1be8ea21ebd0c,PodSandboxId:39ebec44b4aec93277ae28f02e7d753dae92c7ad8d4aa6905adae3c5507d561c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728684062388756692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pjvwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f900fd-3074-4cbe-9ea6-6348dc47cac8,},Annotations:map[string]string{io.kubernetes.container.hash: 57a021c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf21ad8bf5931332c0eda3155169fe87e8c45d5e21a153a00f7dd338656cc06,PodSandboxId:a45ce80a806f3f36f6bf8e3710209f7b62f6b09ae038d3bfc12aecc10afdeb3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684055307587405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1092bdc8-3ba9-465b-9ba8-84c94bbc067b,},Annotations:map[string]string{io.kubernetes.container.hash: df9da6f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf626ba890b7345f9a172ef90248313ad259ca3efa410ed6d4e0a2ce9e83bf51,PodSandboxId:04bfe84e6d7bc760612c6b6260ff2322a0e0d1c977bcf9f0182a0e8f72c395ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728684055053532641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhlzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e
b01871-7c0f-448e-8f8d-4f8e7fdcb354,},Annotations:map[string]string{io.kubernetes.container.hash: 83713d10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60931ffe873791c13850b25f886fad56d2c6aa6fe21ad0e4b7bbd11999868aca,PodSandboxId:eb542ec171522dd67198bb053a298845d312367b25d38127a34e0aa91c245c14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728684050089405618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130496e4
3b8a20ad5acef075ab32e15,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268dd4a9f7692af6cf89e1fd8392a6817884b9119472242ff20a3a90ec3d02a,PodSandboxId:5d9882985432b9090eeeb1a4f88d04662f46c186fc39f0d4d9a7c50841dfef0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728684050025288096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b985751a97b36d607c54d94e7e5691,},Annotations:map
[string]string{io.kubernetes.container.hash: c585fd79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a597b9abfef883c7f337995b41eae20a0c091f0838cd6666d272002bb0ca404b,PodSandboxId:9946da7e7d3523bd0d7ac86031497fcf457f4b71f15ade4f0a78f6ac9b594d27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728684050007338999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbde063a0ea13f9be236a22a29e6d79c,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72981e26aff5e9209b90753a8984930d2bb39a99f929ff6b2ebb88e9a1522cf,PodSandboxId:5d42eb7270cb95547bced077afee5567feaf9d48059b34bf5e1f60a63e88a21f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728684049979208055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f506014bb0736837a7b649c3a98485b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24068989-36e2-4df7-be2e-cd1d9a851e0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.119671417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8485441-4c96-41ec-9c7c-e84f928871e6 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.119772366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8485441-4c96-41ec-9c7c-e84f928871e6 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.120795179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76a830d0-3f45-4539-97b0-00ef05d291c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.121335636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684069121311010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76a830d0-3f45-4539-97b0-00ef05d291c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.121835622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce08e949-589c-4b33-b379-f482808f5b9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.121965969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce08e949-589c-4b33-b379-f482808f5b9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:01:09 test-preload-730152 crio[684]: time="2024-10-11 22:01:09.122123452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63ab581cf7e4f5d8efbd30d34a630deda8ec3d8f5df1969f05f1be8ea21ebd0c,PodSandboxId:39ebec44b4aec93277ae28f02e7d753dae92c7ad8d4aa6905adae3c5507d561c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728684062388756692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-pjvwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f900fd-3074-4cbe-9ea6-6348dc47cac8,},Annotations:map[string]string{io.kubernetes.container.hash: 57a021c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf21ad8bf5931332c0eda3155169fe87e8c45d5e21a153a00f7dd338656cc06,PodSandboxId:a45ce80a806f3f36f6bf8e3710209f7b62f6b09ae038d3bfc12aecc10afdeb3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684055307587405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1092bdc8-3ba9-465b-9ba8-84c94bbc067b,},Annotations:map[string]string{io.kubernetes.container.hash: df9da6f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf626ba890b7345f9a172ef90248313ad259ca3efa410ed6d4e0a2ce9e83bf51,PodSandboxId:04bfe84e6d7bc760612c6b6260ff2322a0e0d1c977bcf9f0182a0e8f72c395ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728684055053532641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhlzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e
b01871-7c0f-448e-8f8d-4f8e7fdcb354,},Annotations:map[string]string{io.kubernetes.container.hash: 83713d10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60931ffe873791c13850b25f886fad56d2c6aa6fe21ad0e4b7bbd11999868aca,PodSandboxId:eb542ec171522dd67198bb053a298845d312367b25d38127a34e0aa91c245c14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728684050089405618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130496e4
3b8a20ad5acef075ab32e15,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268dd4a9f7692af6cf89e1fd8392a6817884b9119472242ff20a3a90ec3d02a,PodSandboxId:5d9882985432b9090eeeb1a4f88d04662f46c186fc39f0d4d9a7c50841dfef0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728684050025288096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b985751a97b36d607c54d94e7e5691,},Annotations:map
[string]string{io.kubernetes.container.hash: c585fd79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a597b9abfef883c7f337995b41eae20a0c091f0838cd6666d272002bb0ca404b,PodSandboxId:9946da7e7d3523bd0d7ac86031497fcf457f4b71f15ade4f0a78f6ac9b594d27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728684050007338999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbde063a0ea13f9be236a22a29e6d79c,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72981e26aff5e9209b90753a8984930d2bb39a99f929ff6b2ebb88e9a1522cf,PodSandboxId:5d42eb7270cb95547bced077afee5567feaf9d48059b34bf5e1f60a63e88a21f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728684049979208055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-730152,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f506014bb0736837a7b649c3a98485b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce08e949-589c-4b33-b379-f482808f5b9a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	63ab581cf7e4f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   39ebec44b4aec       coredns-6d4b75cb6d-pjvwn
	4bf21ad8bf593       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   a45ce80a806f3       storage-provisioner
	bf626ba890b73       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   04bfe84e6d7bc       kube-proxy-lhlzw
	60931ffe87379       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   eb542ec171522       kube-scheduler-test-preload-730152
	8268dd4a9f769       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   5d9882985432b       etcd-test-preload-730152
	a597b9abfef88       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   9946da7e7d352       kube-controller-manager-test-preload-730152
	d72981e26aff5       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   5d42eb7270cb9       kube-apiserver-test-preload-730152
	
	
	==> coredns [63ab581cf7e4f5d8efbd30d34a630deda8ec3d8f5df1969f05f1be8ea21ebd0c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:52117 - 61489 "HINFO IN 13549798355782038.3190985872323603531. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010734471s
	
	
	==> describe nodes <==
	Name:               test-preload-730152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-730152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=test-preload-730152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_59_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-730152
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:01:04 +0000   Fri, 11 Oct 2024 21:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:01:04 +0000   Fri, 11 Oct 2024 21:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:01:04 +0000   Fri, 11 Oct 2024 21:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:01:04 +0000   Fri, 11 Oct 2024 22:01:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    test-preload-730152
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa44ccb313194ee69fc3fe52e15a4877
	  System UUID:                fa44ccb3-1319-4ee6-9fc3-fe52e15a4877
	  Boot ID:                    1e6029b5-1309-4792-8cd5-83140ffa51b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pjvwn                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-test-preload-730152                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-730152             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-730152    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-lhlzw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-test-preload-730152             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 102s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  102s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s               kubelet          Node test-preload-730152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s               kubelet          Node test-preload-730152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s               kubelet          Node test-preload-730152 status is now: NodeHasSufficientPID
	  Normal  NodeReady                91s                kubelet          Node test-preload-730152 status is now: NodeReady
	  Normal  RegisteredNode           90s                node-controller  Node test-preload-730152 event: Registered Node test-preload-730152 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-730152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-730152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-730152 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-730152 event: Registered Node test-preload-730152 in Controller
	
	
	==> dmesg <==
	[Oct11 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051298] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040860] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858796] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.563713] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.646654] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.697560] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.055869] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062069] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.187620] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.118931] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.271020] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[ +13.038064] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.060340] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.229475] systemd-fstab-generator[1139]: Ignoring "noauto" option for root device
	[  +4.635593] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.622211] systemd-fstab-generator[1778]: Ignoring "noauto" option for root device
	[Oct11 22:01] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.028116] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [8268dd4a9f7692af6cf89e1fd8392a6817884b9119472242ff20a3a90ec3d02a] <==
	{"level":"info","ts":"2024-10-11T22:00:50.427Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1bfd5d64eb00b2d5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-11T22:00:50.429Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-11T22:00:50.430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 switched to configuration voters=(2016870896152654549)"}
	{"level":"info","ts":"2024-10-11T22:00:50.430Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","added-peer-id":"1bfd5d64eb00b2d5","added-peer-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-10-11T22:00:50.430Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:00:50.430Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:00:50.436Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:00:50.436Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:00:50.436Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:00:50.439Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-10-11T22:00:50.439Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-10-11T22:00:51.405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 3"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 3"}
	{"level":"info","ts":"2024-10-11T22:00:51.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-10-11T22:00:51.410Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:test-preload-730152 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:00:51.410Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:00:51.411Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:00:51.412Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:00:51.413Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.186:2379"}
	{"level":"info","ts":"2024-10-11T22:00:51.415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:00:51.415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:01:09 up 0 min,  0 users,  load average: 1.48, 0.40, 0.14
	Linux test-preload-730152 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d72981e26aff5e9209b90753a8984930d2bb39a99f929ff6b2ebb88e9a1522cf] <==
	I1011 22:00:53.836223       1 naming_controller.go:291] Starting NamingConditionController
	I1011 22:00:53.836282       1 establishing_controller.go:76] Starting EstablishingController
	I1011 22:00:53.836308       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1011 22:00:53.836320       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1011 22:00:53.836332       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1011 22:00:53.836357       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1011 22:00:53.923426       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1011 22:00:53.923451       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1011 22:00:53.964706       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1011 22:00:53.977134       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1011 22:00:53.988161       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1011 22:00:53.988772       1 cache.go:39] Caches are synced for autoregister controller
	I1011 22:00:53.997074       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1011 22:00:54.005589       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1011 22:00:54.016826       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1011 22:00:54.470219       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1011 22:00:54.823797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1011 22:00:55.410242       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1011 22:00:55.573484       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1011 22:00:55.585401       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1011 22:00:55.629366       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1011 22:00:55.644327       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 22:00:55.650412       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 22:01:06.933395       1 controller.go:611] quota admission added evaluator for: endpoints
	I1011 22:01:07.183187       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a597b9abfef883c7f337995b41eae20a0c091f0838cd6666d272002bb0ca404b] <==
	I1011 22:01:07.068230       1 shared_informer.go:262] Caches are synced for resource quota
	I1011 22:01:07.073959       1 shared_informer.go:262] Caches are synced for deployment
	W1011 22:01:07.112202       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-730152" does not exist
	I1011 22:01:07.126740       1 shared_informer.go:262] Caches are synced for resource quota
	I1011 22:01:07.141990       1 shared_informer.go:262] Caches are synced for disruption
	I1011 22:01:07.142149       1 disruption.go:371] Sending events to api server.
	I1011 22:01:07.146222       1 shared_informer.go:262] Caches are synced for node
	I1011 22:01:07.146322       1 range_allocator.go:173] Starting range CIDR allocator
	I1011 22:01:07.146344       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1011 22:01:07.146422       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1011 22:01:07.148584       1 shared_informer.go:262] Caches are synced for TTL
	I1011 22:01:07.149960       1 shared_informer.go:262] Caches are synced for taint
	I1011 22:01:07.150107       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1011 22:01:07.150286       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-730152. Assuming now as a timestamp.
	I1011 22:01:07.150330       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1011 22:01:07.150411       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1011 22:01:07.151009       1 event.go:294] "Event occurred" object="test-preload-730152" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-730152 event: Registered Node test-preload-730152 in Controller"
	I1011 22:01:07.153926       1 shared_informer.go:262] Caches are synced for GC
	I1011 22:01:07.168080       1 shared_informer.go:262] Caches are synced for persistent volume
	I1011 22:01:07.168296       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1011 22:01:07.168796       1 shared_informer.go:262] Caches are synced for attach detach
	I1011 22:01:07.172154       1 shared_informer.go:262] Caches are synced for daemon sets
	I1011 22:01:07.573398       1 shared_informer.go:262] Caches are synced for garbage collector
	I1011 22:01:07.573443       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1011 22:01:07.612372       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [bf626ba890b7345f9a172ef90248313ad259ca3efa410ed6d4e0a2ce9e83bf51] <==
	I1011 22:00:55.330287       1 node.go:163] Successfully retrieved node IP: 192.168.39.186
	I1011 22:00:55.330438       1 server_others.go:138] "Detected node IP" address="192.168.39.186"
	I1011 22:00:55.330500       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1011 22:00:55.389001       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1011 22:00:55.389019       1 server_others.go:206] "Using iptables Proxier"
	I1011 22:00:55.389070       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1011 22:00:55.389906       1 server.go:661] "Version info" version="v1.24.4"
	I1011 22:00:55.389921       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:00:55.391605       1 config.go:317] "Starting service config controller"
	I1011 22:00:55.392322       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1011 22:00:55.392368       1 config.go:226] "Starting endpoint slice config controller"
	I1011 22:00:55.392376       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1011 22:00:55.397435       1 config.go:444] "Starting node config controller"
	I1011 22:00:55.397446       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1011 22:00:55.492609       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1011 22:00:55.492724       1 shared_informer.go:262] Caches are synced for service config
	I1011 22:00:55.497605       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [60931ffe873791c13850b25f886fad56d2c6aa6fe21ad0e4b7bbd11999868aca] <==
	I1011 22:00:51.073459       1 serving.go:348] Generated self-signed cert in-memory
	W1011 22:00:53.866275       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 22:00:53.866445       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 22:00:53.866461       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 22:00:53.866474       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 22:00:53.961385       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1011 22:00:53.961421       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:00:53.971249       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1011 22:00:53.971503       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 22:00:53.971553       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 22:00:53.971583       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1011 22:00:54.071617       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.317240    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nb28\" (UniqueName: \"kubernetes.io/projected/1092bdc8-3ba9-465b-9ba8-84c94bbc067b-kube-api-access-9nb28\") pod \"storage-provisioner\" (UID: \"1092bdc8-3ba9-465b-9ba8-84c94bbc067b\") " pod="kube-system/storage-provisioner"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.317265    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6n9q\" (UniqueName: \"kubernetes.io/projected/0eb01871-7c0f-448e-8f8d-4f8e7fdcb354-kube-api-access-x6n9q\") pod \"kube-proxy-lhlzw\" (UID: \"0eb01871-7c0f-448e-8f8d-4f8e7fdcb354\") " pod="kube-system/kube-proxy-lhlzw"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.317295    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v76\" (UniqueName: \"kubernetes.io/projected/36f900fd-3074-4cbe-9ea6-6348dc47cac8-kube-api-access-q9v76\") pod \"coredns-6d4b75cb6d-pjvwn\" (UID: \"36f900fd-3074-4cbe-9ea6-6348dc47cac8\") " pod="kube-system/coredns-6d4b75cb6d-pjvwn"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.317312    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0eb01871-7c0f-448e-8f8d-4f8e7fdcb354-xtables-lock\") pod \"kube-proxy-lhlzw\" (UID: \"0eb01871-7c0f-448e-8f8d-4f8e7fdcb354\") " pod="kube-system/kube-proxy-lhlzw"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.317322    1146 reconciler.go:159] "Reconciler: start to sync state"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: E1011 22:00:54.321839    1146 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.425768    1146 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-config-volume\") pod \"c7bdbe1c-7ccc-4527-bcac-5e4a8354710a\" (UID: \"c7bdbe1c-7ccc-4527-bcac-5e4a8354710a\") "
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.425983    1146 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rr6\" (UniqueName: \"kubernetes.io/projected/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-kube-api-access-x6rr6\") pod \"c7bdbe1c-7ccc-4527-bcac-5e4a8354710a\" (UID: \"c7bdbe1c-7ccc-4527-bcac-5e4a8354710a\") "
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: E1011 22:00:54.426488    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: E1011 22:00:54.426623    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume podName:36f900fd-3074-4cbe-9ea6-6348dc47cac8 nodeName:}" failed. No retries permitted until 2024-10-11 22:00:54.926586502 +0000 UTC m=+5.790171982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume") pod "coredns-6d4b75cb6d-pjvwn" (UID: "36f900fd-3074-4cbe-9ea6-6348dc47cac8") : object "kube-system"/"coredns" not registered
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: W1011 22:00:54.427503    1146 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.428214    1146 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7bdbe1c-7ccc-4527-bcac-5e4a8354710a" (UID: "c7bdbe1c-7ccc-4527-bcac-5e4a8354710a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: W1011 22:00:54.428313    1146 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a/volumes/kubernetes.io~projected/kube-api-access-x6rr6: clearQuota called, but quotas disabled
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.428479    1146 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-kube-api-access-x6rr6" (OuterVolumeSpecName: "kube-api-access-x6rr6") pod "c7bdbe1c-7ccc-4527-bcac-5e4a8354710a" (UID: "c7bdbe1c-7ccc-4527-bcac-5e4a8354710a"). InnerVolumeSpecName "kube-api-access-x6rr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.527037    1146 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-config-volume\") on node \"test-preload-730152\" DevicePath \"\""
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: I1011 22:00:54.527091    1146 reconciler.go:384] "Volume detached for volume \"kube-api-access-x6rr6\" (UniqueName: \"kubernetes.io/projected/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a-kube-api-access-x6rr6\") on node \"test-preload-730152\" DevicePath \"\""
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: E1011 22:00:54.930470    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 11 22:00:54 test-preload-730152 kubelet[1146]: E1011 22:00:54.930545    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume podName:36f900fd-3074-4cbe-9ea6-6348dc47cac8 nodeName:}" failed. No retries permitted until 2024-10-11 22:00:55.930528924 +0000 UTC m=+6.794114416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume") pod "coredns-6d4b75cb6d-pjvwn" (UID: "36f900fd-3074-4cbe-9ea6-6348dc47cac8") : object "kube-system"/"coredns" not registered
	Oct 11 22:00:55 test-preload-730152 kubelet[1146]: E1011 22:00:55.939056    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 11 22:00:55 test-preload-730152 kubelet[1146]: E1011 22:00:55.939147    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume podName:36f900fd-3074-4cbe-9ea6-6348dc47cac8 nodeName:}" failed. No retries permitted until 2024-10-11 22:00:57.939131141 +0000 UTC m=+8.802716633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume") pod "coredns-6d4b75cb6d-pjvwn" (UID: "36f900fd-3074-4cbe-9ea6-6348dc47cac8") : object "kube-system"/"coredns" not registered
	Oct 11 22:00:56 test-preload-730152 kubelet[1146]: E1011 22:00:56.367829    1146 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-pjvwn" podUID=36f900fd-3074-4cbe-9ea6-6348dc47cac8
	Oct 11 22:00:57 test-preload-730152 kubelet[1146]: I1011 22:00:57.376263    1146 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c7bdbe1c-7ccc-4527-bcac-5e4a8354710a path="/var/lib/kubelet/pods/c7bdbe1c-7ccc-4527-bcac-5e4a8354710a/volumes"
	Oct 11 22:00:57 test-preload-730152 kubelet[1146]: E1011 22:00:57.953908    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 11 22:00:57 test-preload-730152 kubelet[1146]: E1011 22:00:57.954020    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume podName:36f900fd-3074-4cbe-9ea6-6348dc47cac8 nodeName:}" failed. No retries permitted until 2024-10-11 22:01:01.953991776 +0000 UTC m=+12.817577271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36f900fd-3074-4cbe-9ea6-6348dc47cac8-config-volume") pod "coredns-6d4b75cb6d-pjvwn" (UID: "36f900fd-3074-4cbe-9ea6-6348dc47cac8") : object "kube-system"/"coredns" not registered
	Oct 11 22:00:58 test-preload-730152 kubelet[1146]: E1011 22:00:58.367921    1146 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-pjvwn" podUID=36f900fd-3074-4cbe-9ea6-6348dc47cac8
	
	
	==> storage-provisioner [4bf21ad8bf5931332c0eda3155169fe87e8c45d5e21a153a00f7dd338656cc06] <==
	I1011 22:00:55.459402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-730152 -n test-preload-730152
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-730152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-730152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-730152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-730152: (1.124497811s)
--- FAIL: TestPreload (176.31s)
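The kubelet entries in the post-mortem above show the CoreDNS config-volume mount being retried with a doubling delay (500ms, 1s, 2s, 4s) because the kube-system/coredns ConfigMap was not yet registered, while the CNI network was also reported as not ready. A minimal sketch of that doubling-backoff retry pattern — a generic illustration only, not the kubelet's actual implementation — in Go:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op until it succeeds or maxAttempts is reached,
	// doubling the delay after each failure (500ms, 1s, 2s, 4s, ...), mirroring
	// the durationBeforeRetry values visible in the kubelet log above.
	func retryWithBackoff(op func() error, maxAttempts int) error {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := op(); err == nil {
				return nil
			} else if attempt == maxAttempts {
				return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
			} else {
				fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, delay)
			}
			time.Sleep(delay)
			delay *= 2
		}
		return errors.New("unreachable")
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				return errors.New("object \"kube-system\"/\"coredns\" not registered")
			}
			return nil
		}, 6)
	}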

                                                
                                    
x
+
TestKubernetesUpgrade (523.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m37.914299237s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-370171] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-370171" primary control-plane node in "kubernetes-upgrade-370171" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:03:12.932149   56166 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:03:12.932286   56166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:03:12.932295   56166 out.go:358] Setting ErrFile to fd 2...
	I1011 22:03:12.932298   56166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:03:12.932488   56166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:03:12.933006   56166 out.go:352] Setting JSON to false
	I1011 22:03:12.933843   56166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6338,"bootTime":1728677855,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:03:12.933923   56166 start.go:139] virtualization: kvm guest
	I1011 22:03:12.935938   56166 out.go:177] * [kubernetes-upgrade-370171] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:03:12.937173   56166 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:03:12.937185   56166 notify.go:220] Checking for updates...
	I1011 22:03:12.939348   56166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:03:12.940520   56166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:03:12.941713   56166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:03:12.942851   56166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:03:12.943946   56166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:03:12.945545   56166 config.go:182] Loaded profile config "NoKubernetes-320768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:12.945691   56166 config.go:182] Loaded profile config "force-systemd-env-326657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:12.945833   56166 config.go:182] Loaded profile config "offline-crio-313531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:12.945932   56166 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:03:12.978825   56166 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:03:12.980049   56166 start.go:297] selected driver: kvm2
	I1011 22:03:12.980077   56166 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:03:12.980093   56166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:03:12.980858   56166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:03:12.980943   56166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:03:12.995641   56166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:03:12.995691   56166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 22:03:12.995909   56166 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 22:03:12.995938   56166 cni.go:84] Creating CNI manager for ""
	I1011 22:03:12.995976   56166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:03:12.995986   56166 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 22:03:12.996038   56166 start.go:340] cluster config:
	{Name:kubernetes-upgrade-370171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-370171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:03:12.996147   56166 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:03:12.997969   56166 out.go:177] * Starting "kubernetes-upgrade-370171" primary control-plane node in "kubernetes-upgrade-370171" cluster
	I1011 22:03:12.999221   56166 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:03:12.999259   56166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:03:12.999269   56166 cache.go:56] Caching tarball of preloaded images
	I1011 22:03:12.999343   56166 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:03:12.999352   56166 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:03:12.999464   56166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/config.json ...
	I1011 22:03:12.999481   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/config.json: {Name:mk969f39a6c8dfe7253acd96db858d9c140becdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:03:12.999616   56166 start.go:360] acquireMachinesLock for kubernetes-upgrade-370171: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:04:20.499573   56166 start.go:364] duration metric: took 1m7.499918584s to acquireMachinesLock for "kubernetes-upgrade-370171"
	I1011 22:04:20.499641   56166 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-370171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-370171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:04:20.499739   56166 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 22:04:20.501505   56166 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 22:04:20.501697   56166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:04:20.501751   56166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:04:20.521594   56166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I1011 22:04:20.521985   56166 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:04:20.522564   56166 main.go:141] libmachine: Using API Version  1
	I1011 22:04:20.522581   56166 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:04:20.523055   56166 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:04:20.523271   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetMachineName
	I1011 22:04:20.523418   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:20.523633   56166 start.go:159] libmachine.API.Create for "kubernetes-upgrade-370171" (driver="kvm2")
	I1011 22:04:20.523665   56166 client.go:168] LocalClient.Create starting
	I1011 22:04:20.523704   56166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 22:04:20.523744   56166 main.go:141] libmachine: Decoding PEM data...
	I1011 22:04:20.523770   56166 main.go:141] libmachine: Parsing certificate...
	I1011 22:04:20.523837   56166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 22:04:20.523863   56166 main.go:141] libmachine: Decoding PEM data...
	I1011 22:04:20.523883   56166 main.go:141] libmachine: Parsing certificate...
	I1011 22:04:20.523903   56166 main.go:141] libmachine: Running pre-create checks...
	I1011 22:04:20.523920   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .PreCreateCheck
	I1011 22:04:20.524278   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetConfigRaw
	I1011 22:04:20.524692   56166 main.go:141] libmachine: Creating machine...
	I1011 22:04:20.524710   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Create
	I1011 22:04:20.524803   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Creating KVM machine...
	I1011 22:04:20.525901   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found existing default KVM network
	I1011 22:04:20.526967   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:20.526782   56926 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:07:76:68} reservation:<nil>}
	I1011 22:04:20.527529   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:20.527443   56926 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:b5:b3} reservation:<nil>}
	I1011 22:04:20.528361   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:20.528278   56926 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027ad50}
	I1011 22:04:20.528380   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | created network xml: 
	I1011 22:04:20.528392   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | <network>
	I1011 22:04:20.528400   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   <name>mk-kubernetes-upgrade-370171</name>
	I1011 22:04:20.528414   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   <dns enable='no'/>
	I1011 22:04:20.528427   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   
	I1011 22:04:20.528437   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1011 22:04:20.528447   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |     <dhcp>
	I1011 22:04:20.528467   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1011 22:04:20.528483   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |     </dhcp>
	I1011 22:04:20.528524   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   </ip>
	I1011 22:04:20.528549   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG |   
	I1011 22:04:20.528565   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | </network>
	I1011 22:04:20.528576   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | 
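The XML printed above defines a private host-only libvirt network (192.168.61.0/24) with DHCP enabled and DNS disabled. The kvm2 driver (docker-machine-driver-kvm2) defines and starts this network through the libvirt API directly; the sketch below is only a manual equivalent, invoking the standard virsh CLI from Go, to show the two steps involved (define, then start):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// networkXML mirrors the definition printed in the log above.
	const networkXML = `<network>
	  <name>mk-kubernetes-upgrade-370171</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			panic(err)
		}
		f.Close()

		// virsh net-define registers the network; net-start brings it up.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-kubernetes-upgrade-370171"},
		} {
			cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintln(os.Stderr, "virsh", args[0], "failed:", err)
				os.Exit(1)
			}
		}
	}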
	I1011 22:04:20.533598   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | trying to create private KVM network mk-kubernetes-upgrade-370171 192.168.61.0/24...
	I1011 22:04:20.607083   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171 ...
	I1011 22:04:20.607113   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 22:04:20.607130   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | private KVM network mk-kubernetes-upgrade-370171 192.168.61.0/24 created
	I1011 22:04:20.607145   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 22:04:20.607156   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:20.606951   56926 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:04:20.840633   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:20.840511   56926 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa...
	I1011 22:04:21.174027   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:21.173895   56926 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/kubernetes-upgrade-370171.rawdisk...
	I1011 22:04:21.174059   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Writing magic tar header
	I1011 22:04:21.174077   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Writing SSH key tar header
	I1011 22:04:21.174095   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:21.173997   56926 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171 ...
	I1011 22:04:21.174135   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171
	I1011 22:04:21.174149   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 22:04:21.174162   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171 (perms=drwx------)
	I1011 22:04:21.174181   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:04:21.174204   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 22:04:21.174217   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 22:04:21.174229   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 22:04:21.174242   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home/jenkins
	I1011 22:04:21.174253   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Checking permissions on dir: /home
	I1011 22:04:21.174267   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 22:04:21.174276   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Skipping /home - not owner
	I1011 22:04:21.174300   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 22:04:21.174317   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 22:04:21.174327   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 22:04:21.174335   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Creating domain...
	I1011 22:04:21.175405   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) define libvirt domain using xml: 
	I1011 22:04:21.175424   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) <domain type='kvm'>
	I1011 22:04:21.175438   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <name>kubernetes-upgrade-370171</name>
	I1011 22:04:21.175447   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <memory unit='MiB'>2200</memory>
	I1011 22:04:21.175455   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <vcpu>2</vcpu>
	I1011 22:04:21.175468   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <features>
	I1011 22:04:21.175479   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <acpi/>
	I1011 22:04:21.175499   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <apic/>
	I1011 22:04:21.175511   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <pae/>
	I1011 22:04:21.175516   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     
	I1011 22:04:21.175523   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   </features>
	I1011 22:04:21.175529   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <cpu mode='host-passthrough'>
	I1011 22:04:21.175538   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   
	I1011 22:04:21.175547   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   </cpu>
	I1011 22:04:21.175558   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <os>
	I1011 22:04:21.175568   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <type>hvm</type>
	I1011 22:04:21.175578   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <boot dev='cdrom'/>
	I1011 22:04:21.175586   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <boot dev='hd'/>
	I1011 22:04:21.175596   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <bootmenu enable='no'/>
	I1011 22:04:21.175606   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   </os>
	I1011 22:04:21.175613   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   <devices>
	I1011 22:04:21.175628   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <disk type='file' device='cdrom'>
	I1011 22:04:21.175647   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/boot2docker.iso'/>
	I1011 22:04:21.175657   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <target dev='hdc' bus='scsi'/>
	I1011 22:04:21.175665   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <readonly/>
	I1011 22:04:21.175675   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </disk>
	I1011 22:04:21.175684   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <disk type='file' device='disk'>
	I1011 22:04:21.175696   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 22:04:21.175713   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/kubernetes-upgrade-370171.rawdisk'/>
	I1011 22:04:21.175724   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <target dev='hda' bus='virtio'/>
	I1011 22:04:21.175740   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </disk>
	I1011 22:04:21.175747   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <interface type='network'>
	I1011 22:04:21.175753   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <source network='mk-kubernetes-upgrade-370171'/>
	I1011 22:04:21.175763   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <model type='virtio'/>
	I1011 22:04:21.175768   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </interface>
	I1011 22:04:21.175773   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <interface type='network'>
	I1011 22:04:21.175778   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <source network='default'/>
	I1011 22:04:21.175783   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <model type='virtio'/>
	I1011 22:04:21.175788   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </interface>
	I1011 22:04:21.175795   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <serial type='pty'>
	I1011 22:04:21.175799   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <target port='0'/>
	I1011 22:04:21.175805   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </serial>
	I1011 22:04:21.175810   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <console type='pty'>
	I1011 22:04:21.175817   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <target type='serial' port='0'/>
	I1011 22:04:21.175822   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </console>
	I1011 22:04:21.175826   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     <rng model='virtio'>
	I1011 22:04:21.175857   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)       <backend model='random'>/dev/random</backend>
	I1011 22:04:21.175882   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     </rng>
	I1011 22:04:21.175893   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     
	I1011 22:04:21.175902   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)     
	I1011 22:04:21.175911   56166 main.go:141] libmachine: (kubernetes-upgrade-370171)   </devices>
	I1011 22:04:21.175921   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) </domain>
	I1011 22:04:21.175932   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) 
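For reference, the domain definition above gives the VM 2 vCPUs and 2200 MiB of memory, boots from the boot2docker ISO (cdrom) before falling back to the raw disk, and attaches two virtio NICs: one on the private mk-kubernetes-upgrade-370171 network and one on libvirt's default network, plus a pty serial console and a virtio RNG backed by /dev/random. The manual equivalent of the driver's subsequent "Creating domain..." step would be virsh define followed by virsh start.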
	I1011 22:04:21.182947   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:c8:b4:0b in network default
	I1011 22:04:21.183592   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Ensuring networks are active...
	I1011 22:04:21.183631   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:21.184340   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Ensuring network default is active
	I1011 22:04:21.184682   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Ensuring network mk-kubernetes-upgrade-370171 is active
	I1011 22:04:21.185283   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Getting domain xml...
	I1011 22:04:21.186130   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Creating domain...
	I1011 22:04:22.547576   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Waiting to get IP...
	I1011 22:04:22.548609   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:22.549247   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:22.549269   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:22.549138   56926 retry.go:31] will retry after 296.63022ms: waiting for machine to come up
	I1011 22:04:22.847899   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:22.848497   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:22.848527   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:22.848411   56926 retry.go:31] will retry after 361.644811ms: waiting for machine to come up
	I1011 22:04:23.211945   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:23.212457   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:23.212486   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:23.212412   56926 retry.go:31] will retry after 373.523045ms: waiting for machine to come up
	I1011 22:04:23.588321   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:23.588956   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:23.588984   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:23.588912   56926 retry.go:31] will retry after 608.564903ms: waiting for machine to come up
	I1011 22:04:24.198842   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:24.199405   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:24.199442   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:24.199333   56926 retry.go:31] will retry after 702.050725ms: waiting for machine to come up
	I1011 22:04:24.902551   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:24.903025   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:24.903051   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:24.902982   56926 retry.go:31] will retry after 684.163568ms: waiting for machine to come up
	I1011 22:04:25.589299   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:25.589779   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:25.589826   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:25.589722   56926 retry.go:31] will retry after 1.05905586s: waiting for machine to come up
	I1011 22:04:26.934048   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:26.934644   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:26.934672   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:26.934585   56926 retry.go:31] will retry after 1.127800323s: waiting for machine to come up
	I1011 22:04:28.063919   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:28.064455   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:28.064483   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:28.064380   56926 retry.go:31] will retry after 1.732779219s: waiting for machine to come up
	I1011 22:04:29.799116   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:29.799582   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:29.799613   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:29.799537   56926 retry.go:31] will retry after 2.255704819s: waiting for machine to come up
	I1011 22:04:32.056455   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:32.057077   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:32.057107   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:32.057011   56926 retry.go:31] will retry after 1.876313736s: waiting for machine to come up
	I1011 22:04:33.936043   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:33.936517   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:33.936551   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:33.936473   56926 retry.go:31] will retry after 2.72227071s: waiting for machine to come up
	I1011 22:04:36.660138   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:36.660562   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:36.660582   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:36.660527   56926 retry.go:31] will retry after 3.495381765s: waiting for machine to come up
	I1011 22:04:40.160229   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:40.160752   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find current IP address of domain kubernetes-upgrade-370171 in network mk-kubernetes-upgrade-370171
	I1011 22:04:40.160779   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | I1011 22:04:40.160706   56926 retry.go:31] will retry after 3.97573108s: waiting for machine to come up
	I1011 22:04:44.141045   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.141578   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Found IP for machine: 192.168.61.235
	I1011 22:04:44.141604   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Reserving static IP address...
	I1011 22:04:44.141618   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has current primary IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.141985   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-370171", mac: "52:54:00:da:4d:be", ip: "192.168.61.235"} in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.214591   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Reserved static IP address: 192.168.61.235
	I1011 22:04:44.214632   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Getting to WaitForSSH function...
	I1011 22:04:44.214642   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Waiting for SSH to be available...
	I1011 22:04:44.217430   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.218018   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.218050   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.218216   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Using SSH client type: external
	I1011 22:04:44.218238   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa (-rw-------)
	I1011 22:04:44.218277   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:04:44.218294   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | About to run SSH command:
	I1011 22:04:44.218311   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | exit 0
	I1011 22:04:44.343812   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | SSH cmd err, output: <nil>: 
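The driver treats the machine as reachable once a trivial `exit 0` succeeds over SSH with the options shown above. A simplified readiness probe in the same spirit — polling the SSH port rather than running a command, so an approximation of what the driver actually does — could look like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls addr (host:port) until a TCP connection succeeds or the
	// deadline passes. The kvm2 driver goes further and runs `exit 0` over SSH,
	// but a plain port probe is enough to show the wait-until-ready shape of the loop.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.61.235:22", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("SSH port is open")
	}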
	I1011 22:04:44.344166   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) KVM machine creation complete!
	I1011 22:04:44.344623   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetConfigRaw
	I1011 22:04:44.345279   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:44.345544   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:44.345783   56166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 22:04:44.345799   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetState
	I1011 22:04:44.347436   56166 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 22:04:44.347453   56166 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 22:04:44.347459   56166 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 22:04:44.347464   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:44.350168   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.350562   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.350609   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.350792   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:44.350944   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.351088   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.351185   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:44.351317   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:44.351525   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:44.351541   56166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 22:04:44.454230   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:04:44.454259   56166 main.go:141] libmachine: Detecting the provisioner...
	I1011 22:04:44.454270   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:44.457405   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.457782   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.457815   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.457996   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:44.458227   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.458423   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.458581   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:44.458908   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:44.459130   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:44.459152   56166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 22:04:44.560034   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 22:04:44.560133   56166 main.go:141] libmachine: found compatible host: buildroot
	I1011 22:04:44.560150   56166 main.go:141] libmachine: Provisioning with buildroot...
	I1011 22:04:44.560161   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetMachineName
	I1011 22:04:44.560403   56166 buildroot.go:166] provisioning hostname "kubernetes-upgrade-370171"
	I1011 22:04:44.560428   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetMachineName
	I1011 22:04:44.560621   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:44.563614   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.563995   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.564025   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.564176   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:44.564374   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.564521   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.564642   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:44.564824   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:44.565035   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:44.565054   56166 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-370171 && echo "kubernetes-upgrade-370171" | sudo tee /etc/hostname
	I1011 22:04:44.677479   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-370171
	
	I1011 22:04:44.677520   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:44.680266   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.680626   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.680657   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.680849   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:44.681012   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.681169   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:44.681323   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:44.681523   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:44.681745   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:44.681771   56166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-370171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-370171/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-370171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:04:44.787703   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
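The hostname commands above amount to setting the transient hostname and ensuring a matching 127.0.1.1 entry in /etc/hosts. A minimal way to spot-check the result by hand, assuming SSH access with the machine key shown elsewhere in this log (illustrative sketch, not part of the captured output):

    ssh -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa \
        docker@192.168.61.235 'hostname; grep 127.0.1.1 /etc/hosts'
    # expected if the step succeeded:
    #   kubernetes-upgrade-370171
    #   127.0.1.1 kubernetes-upgrade-370171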
	I1011 22:04:44.787734   56166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:04:44.787756   56166 buildroot.go:174] setting up certificates
	I1011 22:04:44.787770   56166 provision.go:84] configureAuth start
	I1011 22:04:44.787783   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetMachineName
	I1011 22:04:44.788074   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetIP
	I1011 22:04:44.790680   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.791068   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.791098   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.791269   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:44.793271   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.793559   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:44.793587   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:44.793698   56166 provision.go:143] copyHostCerts
	I1011 22:04:44.793762   56166 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:04:44.793778   56166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:04:44.793860   56166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:04:44.793943   56166 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:04:44.793951   56166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:04:44.793969   56166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:04:44.794019   56166 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:04:44.794026   56166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:04:44.794042   56166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:04:44.794085   56166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-370171 san=[127.0.0.1 192.168.61.235 kubernetes-upgrade-370171 localhost minikube]
	I1011 22:04:45.031283   56166 provision.go:177] copyRemoteCerts
	I1011 22:04:45.031337   56166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:04:45.031363   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.034245   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.034510   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.034551   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.034721   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.034921   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.035079   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.035255   56166 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:04:45.117286   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:04:45.141662   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1011 22:04:45.166607   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:04:45.189959   56166 provision.go:87] duration metric: took 402.177955ms to configureAuth
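configureAuth above generated a server certificate with the SANs listed at provision.go:117 and copied it to /etc/docker/server.pem on the guest. A quick way to confirm the SANs made it into the certificate, using standard openssl tooling (sketch, not part of the captured output):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # should list 127.0.0.1, 192.168.61.235, kubernetes-upgrade-370171, localhost and minikube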
	I1011 22:04:45.189984   56166 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:04:45.190164   56166 config.go:182] Loaded profile config "kubernetes-upgrade-370171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:04:45.190255   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.193452   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.193849   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.193883   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.194043   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.194285   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.194475   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.194682   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.194841   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:45.195004   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:45.195018   56166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:04:45.405183   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:04:45.405219   56166 main.go:141] libmachine: Checking connection to Docker...
	I1011 22:04:45.405231   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetURL
	I1011 22:04:45.406604   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Using libvirt version 6000000
	I1011 22:04:45.408747   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.409088   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.409116   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.409297   56166 main.go:141] libmachine: Docker is up and running!
	I1011 22:04:45.409318   56166 main.go:141] libmachine: Reticulating splines...
	I1011 22:04:45.409325   56166 client.go:171] duration metric: took 24.885652728s to LocalClient.Create
	I1011 22:04:45.409348   56166 start.go:167] duration metric: took 24.885715413s to libmachine.API.Create "kubernetes-upgrade-370171"
	I1011 22:04:45.409364   56166 start.go:293] postStartSetup for "kubernetes-upgrade-370171" (driver="kvm2")
	I1011 22:04:45.409379   56166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:04:45.409401   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:45.409627   56166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:04:45.409657   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.411778   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.412200   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.412242   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.412383   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.412602   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.412749   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.412931   56166 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:04:45.493595   56166 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:04:45.498087   56166 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:04:45.498118   56166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:04:45.498216   56166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:04:45.498353   56166 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:04:45.498497   56166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:04:45.508359   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:04:45.535571   56166 start.go:296] duration metric: took 126.191059ms for postStartSetup
	I1011 22:04:45.535635   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetConfigRaw
	I1011 22:04:45.536242   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetIP
	I1011 22:04:45.538812   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.539147   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.539176   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.539404   56166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/config.json ...
	I1011 22:04:45.539599   56166 start.go:128] duration metric: took 25.039847242s to createHost
	I1011 22:04:45.539624   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.541855   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.542199   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.542240   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.542376   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.542597   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.542784   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.542979   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.543194   56166 main.go:141] libmachine: Using SSH client type: native
	I1011 22:04:45.543359   56166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1011 22:04:45.543371   56166 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:04:45.639393   56166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728684285.614758833
	
	I1011 22:04:45.639422   56166 fix.go:216] guest clock: 1728684285.614758833
	I1011 22:04:45.639429   56166 fix.go:229] Guest: 2024-10-11 22:04:45.614758833 +0000 UTC Remote: 2024-10-11 22:04:45.539612961 +0000 UTC m=+92.651450164 (delta=75.145872ms)
	I1011 22:04:45.639454   56166 fix.go:200] guest clock delta is within tolerance: 75.145872ms
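The guest-clock check works by running date +%s.%N over SSH and comparing it against the host's wall clock at the moment the command returns; the 75ms delta above is inside minikube's tolerance. Done by hand it looks roughly like this (sketch; <machine-key> stands for the id_rsa path used elsewhere in the log):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i <machine-key> docker@192.168.61.235 'date +%s.%N')
    echo "skew: $(echo "$guest_ts - $host_ts" | bc) seconds"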
	I1011 22:04:45.639458   56166 start.go:83] releasing machines lock for "kubernetes-upgrade-370171", held for 25.13986061s
	I1011 22:04:45.639484   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:45.639759   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetIP
	I1011 22:04:45.642631   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.643044   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.643075   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.643176   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:45.643726   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:45.643879   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:04:45.643968   56166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:04:45.644023   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.644090   56166 ssh_runner.go:195] Run: cat /version.json
	I1011 22:04:45.644117   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:04:45.646992   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.647136   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.647342   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.647367   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.647488   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.647521   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:45.647551   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:45.647664   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.647724   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:04:45.647804   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.647872   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:04:45.647940   56166 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:04:45.648051   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:04:45.648196   56166 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:04:45.746397   56166 ssh_runner.go:195] Run: systemctl --version
	I1011 22:04:45.753443   56166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:04:45.926865   56166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:04:45.933892   56166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:04:45.933964   56166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:04:45.956532   56166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:04:45.956561   56166 start.go:495] detecting cgroup driver to use...
	I1011 22:04:45.956639   56166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:04:45.974809   56166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:04:45.991915   56166 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:04:45.991985   56166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:04:46.008023   56166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:04:46.022481   56166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:04:46.150759   56166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:04:46.333105   56166 docker.go:233] disabling docker service ...
	I1011 22:04:46.333174   56166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:04:46.347676   56166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:04:46.361195   56166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:04:46.492679   56166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:04:46.620341   56166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
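The sequence above stops and masks cri-dockerd and the docker daemon so that CRI-O is the only runtime answering on the node. Condensed into the underlying systemctl calls (the same commands the log runs, gathered into one sketch):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active docker || echo "docker is no longer active"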
	I1011 22:04:46.636203   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:04:46.655921   56166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:04:46.655981   56166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:04:46.666506   56166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:04:46.666568   56166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:04:46.677167   56166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:04:46.687773   56166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
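The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the v1.20-era pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. Gathered in one place (sketch; identical to the commands in the log, followed by the restart and version check the log performs a few lines later):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl restart crio && sudo crictl version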
	I1011 22:04:46.697904   56166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:04:46.708756   56166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:04:46.717928   56166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:04:46.717977   56166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:04:46.730480   56166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
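The sysctl probe fails with status 255 because the net.bridge.* keys only exist once the br_netfilter module is loaded; the provisioner therefore falls back to modprobe and then enables IPv4 forwarding. The recovery path, condensed (sketch):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables      # resolves once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"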
	I1011 22:04:46.740361   56166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:04:46.857363   56166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:04:46.965112   56166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:04:46.965211   56166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:04:46.971118   56166 start.go:563] Will wait 60s for crictl version
	I1011 22:04:46.971176   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:46.975516   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:04:47.021181   56166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:04:47.021264   56166 ssh_runner.go:195] Run: crio --version
	I1011 22:04:47.050339   56166 ssh_runner.go:195] Run: crio --version
	I1011 22:04:47.079467   56166 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:04:47.080849   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetIP
	I1011 22:04:47.083777   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:47.084205   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:04:36 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:04:47.084235   56166 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:04:47.084480   56166 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:04:47.088833   56166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:04:47.102326   56166 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-370171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-370171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:04:47.102472   56166 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:04:47.102550   56166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:04:47.137349   56166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:04:47.137425   56166 ssh_runner.go:195] Run: which lz4
	I1011 22:04:47.141899   56166 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:04:47.146107   56166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:04:47.146138   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:04:48.797221   56166 crio.go:462] duration metric: took 1.65534357s to copy over tarball
	I1011 22:04:48.797310   56166 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:04:51.388977   56166 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.591631045s)
	I1011 22:04:51.389007   56166 crio.go:469] duration metric: took 2.591752984s to extract the tarball
	I1011 22:04:51.389017   56166 ssh_runner.go:146] rm: /preloaded.tar.lz4
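Because the guest has no preloaded images, minikube copies the ~473 MB preload tarball over SSH and unpacks it under /var before resorting to loading individual cached images. The transfer-and-extract step boils down to the following (sketch; paths taken from the log, scp used as an illustrative stand-in for minikube's own copy mechanism):

    scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.61.235:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4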
	I1011 22:04:51.432127   56166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:04:51.479982   56166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:04:51.480007   56166 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:04:51.480046   56166 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:04:51.480073   56166 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:51.480085   56166 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.480114   56166 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:51.480154   56166 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.480195   56166 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:51.480234   56166 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:04:51.480247   56166 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:04:51.481875   56166 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:51.481884   56166 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:04:51.481898   56166 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:51.481875   56166 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.481884   56166 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:04:51.481925   56166 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.481929   56166 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:04:51.482248   56166 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:51.664473   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:04:51.673243   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.689323   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.716975   56166 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:04:51.717033   56166 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:04:51.717079   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:51.729850   56166 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:04:51.729897   56166 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.729945   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:51.755072   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:04:51.755083   56166 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:04:51.755125   56166 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.755071   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.755164   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:51.760517   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:51.768093   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:51.825541   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.825607   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:04:51.825541   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.846014   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:51.853328   56166 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:04:51.853376   56166 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:51.853419   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:51.871500   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:04:51.939572   56166 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:04:51.939622   56166 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:51.939672   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:51.961442   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:04:51.961443   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:51.961503   56166 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:04:51.961549   56166 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:51.961562   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:51.961567   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:04:51.961590   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:52.037016   56166 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:04:52.037066   56166 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:04:52.037097   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:52.037109   56166 ssh_runner.go:195] Run: which crictl
	I1011 22:04:52.037169   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:52.065228   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:04:52.080539   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:04:52.092072   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:52.092107   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:04:52.143254   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:52.143301   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:04:52.146401   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:52.195901   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:04:52.195994   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:04:52.248870   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:04:52.248870   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:04:52.248955   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:04:52.276213   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:04:52.342877   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:04:52.342894   56166 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:04:52.342909   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:04:52.376422   56166 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:04:52.596357   56166 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:04:52.741839   56166 cache_images.go:92] duration metric: took 1.261814851s to LoadCachedImages
	W1011 22:04:52.741945   56166 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:04:52.741961   56166 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.20.0 crio true true} ...
	I1011 22:04:52.742078   56166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-370171 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-370171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
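The unit fragment above is what gets written to the 10-kubeadm.conf drop-in under /etc/systemd/system/kubelet.service.d a few lines further on, pinning kubelet to the v1.20.0 binary, the CRI-O socket and the node IP. Once the files are in place, a standard way to confirm systemd picked them up is (sketch, ordinary systemd tooling; daemon-reload and the kubelet start also appear later in the log):

    sudo systemctl daemon-reload
    systemctl cat kubelet        # prints kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl start kubelet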
	I1011 22:04:52.742158   56166 ssh_runner.go:195] Run: crio config
	I1011 22:04:52.792243   56166 cni.go:84] Creating CNI manager for ""
	I1011 22:04:52.792266   56166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:04:52.792275   56166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:04:52.792293   56166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-370171 NodeName:kubernetes-upgrade-370171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:04:52.792447   56166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-370171"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:04:52.792506   56166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:04:52.802991   56166 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:04:52.803058   56166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:04:52.812938   56166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1011 22:04:52.831552   56166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:04:52.851905   56166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1011 22:04:52.872561   56166 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1011 22:04:52.876618   56166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
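The bash one-liner above rewrites /etc/hosts atomically: it filters out any stale control-plane.minikube.internal entry, appends the current one, and copies the temp file back into place. Verifying the result is a one-liner (sketch):

    grep control-plane.minikube.internal /etc/hosts
    # 192.168.61.235    control-plane.minikube.internal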
	I1011 22:04:52.889357   56166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:04:53.001060   56166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:04:53.018848   56166 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171 for IP: 192.168.61.235
	I1011 22:04:53.018871   56166 certs.go:194] generating shared ca certs ...
	I1011 22:04:53.018891   56166 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.019049   56166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:04:53.019104   56166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:04:53.019118   56166 certs.go:256] generating profile certs ...
	I1011 22:04:53.019189   56166 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.key
	I1011 22:04:53.019219   56166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.crt with IP's: []
	I1011 22:04:53.338224   56166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.crt ...
	I1011 22:04:53.338255   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.crt: {Name:mk9f371d9f745c3ee8d4410212970509aff8acd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.338442   56166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.key ...
	I1011 22:04:53.338465   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.key: {Name:mkb67dbfc3fa56626f75039f249a1cadb95b14de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.338573   56166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key.69a1eb35
	I1011 22:04:53.338603   56166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt.69a1eb35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.235]
	I1011 22:04:53.429369   56166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt.69a1eb35 ...
	I1011 22:04:53.429404   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt.69a1eb35: {Name:mkaac61b7601e36d4aee76f3d7e6422c5a2c57d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.429591   56166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key.69a1eb35 ...
	I1011 22:04:53.429610   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key.69a1eb35: {Name:mk2caaa576c1b1a43f3ff9fc87263a1f1c109e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.429707   56166 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt.69a1eb35 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt
	I1011 22:04:53.429800   56166 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key.69a1eb35 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key
	I1011 22:04:53.429879   56166 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.key
	I1011 22:04:53.429900   56166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.crt with IP's: []
	I1011 22:04:53.641512   56166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.crt ...
	I1011 22:04:53.641542   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.crt: {Name:mk6951663ac2aa708600f6d3f0d4d9bab2fe61e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.641717   56166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.key ...
	I1011 22:04:53.641734   56166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.key: {Name:mkad5de6de6546c91d3b6363121480fab26edced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:04:53.641930   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:04:53.641982   56166 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:04:53.641997   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:04:53.642033   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:04:53.642066   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:04:53.642096   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:04:53.642153   56166 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:04:53.642785   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:04:53.670832   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:04:53.701254   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:04:53.730219   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:04:53.757611   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1011 22:04:53.783827   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:04:53.808653   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:04:53.837439   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:04:53.865183   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:04:53.893460   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:04:53.926258   56166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:04:53.974978   56166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:04:54.000427   56166 ssh_runner.go:195] Run: openssl version
	I1011 22:04:54.007387   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:04:54.021746   56166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:04:54.026593   56166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:04:54.026673   56166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:04:54.032780   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:04:54.047876   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:04:54.062109   56166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:04:54.066897   56166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:04:54.066949   56166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:04:54.073257   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:04:54.084041   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:04:54.094767   56166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:04:54.099095   56166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:04:54.099152   56166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:04:54.105190   56166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:04:54.117284   56166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:04:54.121447   56166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 22:04:54.121508   56166 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-370171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-370171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:04:54.121591   56166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:04:54.121638   56166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:04:54.159239   56166 cri.go:89] found id: ""
	I1011 22:04:54.159320   56166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:04:54.169516   56166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:04:54.182701   56166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:04:54.193524   56166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:04:54.193544   56166 kubeadm.go:157] found existing configuration files:
	
	I1011 22:04:54.193585   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:04:54.205635   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:04:54.205712   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:04:54.214864   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:04:54.224683   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:04:54.224748   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:04:54.234510   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:04:54.243745   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:04:54.243815   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:04:54.252981   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:04:54.261671   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:04:54.261723   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:04:54.273982   56166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:04:54.406436   56166 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:04:54.406552   56166 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:04:54.564879   56166 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:04:54.565052   56166 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:04:54.565201   56166 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:04:54.753947   56166 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:04:54.875732   56166 out.go:235]   - Generating certificates and keys ...
	I1011 22:04:54.875880   56166 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:04:54.875970   56166 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:04:54.876072   56166 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 22:04:55.103566   56166 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 22:04:55.217219   56166 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 22:04:55.292649   56166 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 22:04:55.388295   56166 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 22:04:55.388691   56166 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	I1011 22:04:55.804579   56166 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 22:04:55.804918   56166 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	I1011 22:04:56.000515   56166 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 22:04:56.141092   56166 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 22:04:56.255815   56166 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 22:04:56.256121   56166 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:04:56.350002   56166 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:04:56.477516   56166 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:04:56.657691   56166 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:04:56.812807   56166 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:04:56.833051   56166 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:04:56.833185   56166 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:04:56.833242   56166 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:04:56.953115   56166 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:04:56.955005   56166 out.go:235]   - Booting up control plane ...
	I1011 22:04:56.955133   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:04:56.963267   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:04:56.964413   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:04:56.965278   56166 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:04:56.969725   56166 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:05:36.963051   56166 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:05:36.963593   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:05:36.963849   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:05:41.964061   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:05:41.964243   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:05:51.963289   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:05:51.963485   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:06:11.963233   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:06:11.963425   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:06:51.964896   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:06:51.965203   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:06:51.965238   56166 kubeadm.go:310] 
	I1011 22:06:51.965306   56166 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:06:51.965369   56166 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:06:51.965382   56166 kubeadm.go:310] 
	I1011 22:06:51.965455   56166 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:06:51.965536   56166 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:06:51.965690   56166 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:06:51.965704   56166 kubeadm.go:310] 
	I1011 22:06:51.965855   56166 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:06:51.965926   56166 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:06:51.965976   56166 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:06:51.965987   56166 kubeadm.go:310] 
	I1011 22:06:51.966147   56166 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:06:51.966263   56166 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:06:51.966283   56166 kubeadm.go:310] 
	I1011 22:06:51.966458   56166 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:06:51.966579   56166 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:06:51.966703   56166 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:06:51.966816   56166 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:06:51.966830   56166 kubeadm.go:310] 
	I1011 22:06:51.967026   56166 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:06:51.967145   56166 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:06:51.967250   56166 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:06:51.967382   56166 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-370171 localhost] and IPs [192.168.61.235 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:06:51.967422   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:06:53.697147   56166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.729699856s)
	I1011 22:06:53.697227   56166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:06:53.711275   56166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:06:53.721525   56166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:06:53.721551   56166 kubeadm.go:157] found existing configuration files:
	
	I1011 22:06:53.721604   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:06:53.731885   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:06:53.731960   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:06:53.741557   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:06:53.750461   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:06:53.750517   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:06:53.759802   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:06:53.768381   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:06:53.768440   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:06:53.777516   56166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:06:53.786547   56166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:06:53.786599   56166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:06:53.795658   56166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:06:53.865543   56166 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:06:53.865680   56166 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:06:54.003719   56166 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:06:54.003905   56166 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:06:54.004035   56166 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:06:54.195798   56166 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:06:54.197812   56166 out.go:235]   - Generating certificates and keys ...
	I1011 22:06:54.197921   56166 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:06:54.198013   56166 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:06:54.198117   56166 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:06:54.198183   56166 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:06:54.198300   56166 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:06:54.198386   56166 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:06:54.198478   56166 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:06:54.198565   56166 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:06:54.198688   56166 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:06:54.198760   56166 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:06:54.198795   56166 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:06:54.198845   56166 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:06:54.356585   56166 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:06:54.531161   56166 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:06:54.834908   56166 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:06:55.011146   56166 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:06:55.031126   56166 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:06:55.032026   56166 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:06:55.032075   56166 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:06:55.159277   56166 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:06:55.161149   56166 out.go:235]   - Booting up control plane ...
	I1011 22:06:55.161270   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:06:55.165381   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:06:55.166313   56166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:06:55.167794   56166 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:06:55.179575   56166 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:07:35.182230   56166 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:07:35.182812   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:07:35.183020   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:07:40.183643   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:07:40.183879   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:07:50.184398   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:07:50.184629   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:08:10.183986   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:08:10.184212   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:08:50.183799   56166 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:08:50.184163   56166 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:08:50.184188   56166 kubeadm.go:310] 
	I1011 22:08:50.184247   56166 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:08:50.184340   56166 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:08:50.184356   56166 kubeadm.go:310] 
	I1011 22:08:50.184386   56166 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:08:50.184422   56166 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:08:50.184559   56166 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:08:50.184571   56166 kubeadm.go:310] 
	I1011 22:08:50.184697   56166 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:08:50.184743   56166 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:08:50.184808   56166 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:08:50.184828   56166 kubeadm.go:310] 
	I1011 22:08:50.184961   56166 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:08:50.185034   56166 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:08:50.185042   56166 kubeadm.go:310] 
	I1011 22:08:50.185183   56166 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:08:50.185325   56166 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:08:50.185447   56166 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:08:50.185565   56166 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:08:50.185580   56166 kubeadm.go:310] 
	I1011 22:08:50.186156   56166 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:08:50.186306   56166 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:08:50.186373   56166 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:08:50.186435   56166 kubeadm.go:394] duration metric: took 3m56.064932605s to StartCluster
	I1011 22:08:50.186503   56166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:08:50.186560   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:08:50.230242   56166 cri.go:89] found id: ""
	I1011 22:08:50.230280   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.230301   56166 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:08:50.230307   56166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:08:50.230362   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:08:50.264732   56166 cri.go:89] found id: ""
	I1011 22:08:50.264763   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.264772   56166 logs.go:284] No container was found matching "etcd"
	I1011 22:08:50.264779   56166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:08:50.264843   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:08:50.298118   56166 cri.go:89] found id: ""
	I1011 22:08:50.298148   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.298157   56166 logs.go:284] No container was found matching "coredns"
	I1011 22:08:50.298165   56166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:08:50.298221   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:08:50.333410   56166 cri.go:89] found id: ""
	I1011 22:08:50.333451   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.333459   56166 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:08:50.333465   56166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:08:50.333524   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:08:50.369968   56166 cri.go:89] found id: ""
	I1011 22:08:50.369999   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.370009   56166 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:08:50.370015   56166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:08:50.370080   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:08:50.415069   56166 cri.go:89] found id: ""
	I1011 22:08:50.415098   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.415105   56166 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:08:50.415111   56166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:08:50.415173   56166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:08:50.454505   56166 cri.go:89] found id: ""
	I1011 22:08:50.454531   56166 logs.go:282] 0 containers: []
	W1011 22:08:50.454540   56166 logs.go:284] No container was found matching "kindnet"
	I1011 22:08:50.454552   56166 logs.go:123] Gathering logs for kubelet ...
	I1011 22:08:50.454565   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:08:50.510327   56166 logs.go:123] Gathering logs for dmesg ...
	I1011 22:08:50.510365   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:08:50.524793   56166 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:08:50.524820   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:08:50.644006   56166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:08:50.644029   56166 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:08:50.644040   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:08:50.745730   56166 logs.go:123] Gathering logs for container status ...
	I1011 22:08:50.745765   56166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1011 22:08:50.786945   56166 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:08:50.787021   56166 out.go:270] * 
	* 
	W1011 22:08:50.787079   56166 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:08:50.787096   56166 out.go:270] * 
	W1011 22:08:50.787934   56166 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:08:50.791058   56166 out.go:201] 
	W1011 22:08:50.792248   56166 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:08:50.792286   56166 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:08:50.792304   56166 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:08:50.793689   56166 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
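The wait-control-plane timeout above is kubeadm reporting that the kubelet never came up on the v1.20.0 profile, which is what turns this first start into exit status 109. A rough, hedged sketch of the checks the log itself recommends (it assumes shell access into the guest, e.g. via `minikube ssh -p kubernetes-upgrade-370171`):

    # inside the minikube VM
    sudo systemctl status kubelet        # is the kubelet service running at all?
    sudo journalctl -xeu kubelet         # why did it fail or exit?
    # list control-plane containers through CRI-O, exactly as the kubeadm output suggests
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
    # back on the host, the suggestion printed by minikube is to retry with an explicit cgroup driver
    out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd --driver=kvm2 --container-runtime=crio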
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-370171
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-370171: (6.292065896s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-370171 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-370171 status --format={{.Host}}: exit status 7 (62.029645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
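The stdout above reports the host as Stopped, so the non-zero status is expected here; the test treats exit status 7 as acceptable and simply proceeds to restart the profile. A minimal re-check before the second start might look like:

    # hedged example: confirm the profile really is stopped before starting it again
    out/minikube-linux-amd64 -p kubernetes-upgrade-370171 status --format={{.Host}}   # prints "Stopped" for a stopped host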
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m58.6823826s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-370171 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (121.682101ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-370171] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-370171
	    minikube start -p kubernetes-upgrade-370171 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3701712 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-370171 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
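The downgrade guard works as the test expects: minikube refuses to move the existing v1.31.1 profile back to v1.20.0 and prints the three options above instead. A hedged sketch of acting on option 1, after first confirming what the cluster is actually running (as the test did with kubectl a step earlier):

    # check the version reported by the existing profile
    kubectl --context kubernetes-upgrade-370171 version --output=json
    # option 1 from the suggestion: recreate the profile at the older version
    minikube delete -p kubernetes-upgrade-370171
    minikube start -p kubernetes-upgrade-370171 --kubernetes-version=v1.20.0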
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-370171 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.893487767s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-11 22:11:52.980164259 +0000 UTC m=+4432.994521781
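From here the harness collects a post-mortem: cluster status, then the last 25 lines of the minikube log (`logs -n 25`). For a complete capture to attach to a minikube issue, the boxed hint earlier in this log suggests writing the full log to a file, roughly:

    # hedged example of the boxed suggestion above
    out/minikube-linux-amd64 -p kubernetes-upgrade-370171 logs --file=logs.txt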
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-370171 -n kubernetes-upgrade-370171
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-370171 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-370171 logs -n 25: (1.828404575s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-320768                | NoKubernetes-320768       | jenkins | v1.34.0 | 11 Oct 24 22:07 UTC | 11 Oct 24 22:07 UTC |
	| start   | -p cert-expiration-993898             | cert-expiration-993898    | jenkins | v1.34.0 | 11 Oct 24 22:07 UTC | 11 Oct 24 22:08 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:07 UTC | 11 Oct 24 22:09 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-604134             | running-upgrade-604134    | jenkins | v1.34.0 | 11 Oct 24 22:07 UTC | 11 Oct 24 22:07 UTC |
	| start   | -p force-systemd-flag-906123          | force-systemd-flag-906123 | jenkins | v1.34.0 | 11 Oct 24 22:07 UTC | 11 Oct 24 22:10 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-370171          | kubernetes-upgrade-370171 | jenkins | v1.34.0 | 11 Oct 24 22:08 UTC | 11 Oct 24 22:08 UTC |
	| start   | -p kubernetes-upgrade-370171          | kubernetes-upgrade-370171 | jenkins | v1.34.0 | 11 Oct 24 22:08 UTC | 11 Oct 24 22:10 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:09 UTC | 11 Oct 24 22:09 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:09 UTC | 11 Oct 24 22:09 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:09 UTC | 11 Oct 24 22:09 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:09 UTC | 11 Oct 24 22:09 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-318346                       | pause-318346              | jenkins | v1.34.0 | 11 Oct 24 22:09 UTC | 11 Oct 24 22:09 UTC |
	| ssh     | force-systemd-flag-906123 ssh cat     | force-systemd-flag-906123 | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-906123          | force-systemd-flag-906123 | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	| start   | -p cert-options-413599                | cert-options-413599       | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p auto-579309 --memory=3072          | auto-579309               | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-413599 ssh               | cert-options-413599       | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-413599 -- sudo        | cert-options-413599       | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-413599                | cert-options-413599       | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:10 UTC |
	| start   | -p kindnet-579309                     | kindnet-579309            | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-370171          | kubernetes-upgrade-370171 | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-370171          | kubernetes-upgrade-370171 | jenkins | v1.34.0 | 11 Oct 24 22:10 UTC | 11 Oct 24 22:11 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-993898             | cert-expiration-993898    | jenkins | v1.34.0 | 11 Oct 24 22:11 UTC | 11 Oct 24 22:11 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-993898             | cert-expiration-993898    | jenkins | v1.34.0 | 11 Oct 24 22:11 UTC | 11 Oct 24 22:11 UTC |
	| start   | -p calico-579309 --memory=3072        | calico-579309             | jenkins | v1.34.0 | 11 Oct 24 22:11 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:11:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:11:51.713901   62957 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:11:51.714029   62957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:11:51.714040   62957 out.go:358] Setting ErrFile to fd 2...
	I1011 22:11:51.714047   62957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:11:51.714303   62957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:11:51.715056   62957 out.go:352] Setting JSON to false
	I1011 22:11:51.715980   62957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6857,"bootTime":1728677855,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:11:51.716082   62957 start.go:139] virtualization: kvm guest
	I1011 22:11:51.717853   62957 out.go:177] * [calico-579309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:11:51.719410   62957 notify.go:220] Checking for updates...
	I1011 22:11:51.719433   62957 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:11:51.720655   62957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:11:51.721804   62957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:11:51.723069   62957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:11:51.724208   62957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:11:51.725397   62957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:11:51.726987   62957 config.go:182] Loaded profile config "auto-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:11:51.727103   62957 config.go:182] Loaded profile config "kindnet-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:11:51.727203   62957 config.go:182] Loaded profile config "kubernetes-upgrade-370171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:11:51.727398   62957 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:11:51.765816   62957 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:11:51.766967   62957 start.go:297] selected driver: kvm2
	I1011 22:11:51.766981   62957 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:11:51.766991   62957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:11:51.767720   62957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:11:51.767798   62957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:11:51.783263   62957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:11:51.783309   62957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 22:11:51.783627   62957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:11:51.783669   62957 cni.go:84] Creating CNI manager for "calico"
	I1011 22:11:51.783677   62957 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1011 22:11:51.783739   62957 start.go:340] cluster config:
	{Name:calico-579309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-579309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:11:51.783881   62957 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:11:51.785422   62957 out.go:177] * Starting "calico-579309" primary control-plane node in "calico-579309" cluster
	I1011 22:11:51.786732   62957 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:11:51.786771   62957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 22:11:51.786783   62957 cache.go:56] Caching tarball of preloaded images
	I1011 22:11:51.786959   62957 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:11:51.786977   62957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 22:11:51.787115   62957 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/config.json ...
	I1011 22:11:51.787143   62957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/config.json: {Name:mk73bbe19222bbb9c867115cbb33fe3130bdf1f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:11:51.787317   62957 start.go:360] acquireMachinesLock for calico-579309: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:11:51.787350   62957 start.go:364] duration metric: took 17.158µs to acquireMachinesLock for "calico-579309"
	I1011 22:11:51.787371   62957 start.go:93] Provisioning new machine with config: &{Name:calico-579309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:calico-579309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:11:51.787450   62957 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 22:11:51.435106   62447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:11:51.449214   62447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:11:51.469624   62447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:11:51.469704   62447 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 22:11:51.469723   62447 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 22:11:51.479248   62447 system_pods.go:59] 6 kube-system pods found
	I1011 22:11:51.479274   62447 system_pods.go:61] "coredns-7c65d6cfc9-8cnv7" [265fcadf-63d8-412d-95bd-88ea65a8d0b8] Running
	I1011 22:11:51.479282   62447 system_pods.go:61] "coredns-7c65d6cfc9-cmnln" [ebb55b19-2da3-4363-956c-69d5904cf8ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:11:51.479289   62447 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-370171" [eb39dd66-6588-483c-a6bc-bb01a2d51ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:11:51.479296   62447 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-370171" [033bf9db-2486-4a0e-95b0-2c94659c371f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:11:51.479302   62447 system_pods.go:61] "kube-proxy-nnghj" [fdd3cac6-023d-493f-b2d0-eb014ab9df37] Running
	I1011 22:11:51.479310   62447 system_pods.go:61] "storage-provisioner" [c9808131-f2d9-4e98-bda1-2ed739a56d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:11:51.479319   62447 system_pods.go:74] duration metric: took 9.678404ms to wait for pod list to return data ...
	I1011 22:11:51.479331   62447 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:11:51.483081   62447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:11:51.483107   62447 node_conditions.go:123] node cpu capacity is 2
	I1011 22:11:51.483119   62447 node_conditions.go:105] duration metric: took 3.782522ms to run NodePressure ...
	I1011 22:11:51.483137   62447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:11:51.806556   62447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:11:51.822363   62447 ops.go:34] apiserver oom_adj: -16
	I1011 22:11:51.822382   62447 kubeadm.go:597] duration metric: took 18.952730016s to restartPrimaryControlPlane
	I1011 22:11:51.822393   62447 kubeadm.go:394] duration metric: took 19.27130414s to StartCluster
	I1011 22:11:51.822411   62447 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:11:51.822484   62447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:11:51.823884   62447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:11:51.824094   62447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:11:51.824670   62447 config.go:182] Loaded profile config "kubernetes-upgrade-370171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:11:51.824722   62447 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:11:51.824803   62447 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-370171"
	I1011 22:11:51.824823   62447 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-370171"
	W1011 22:11:51.824834   62447 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:11:51.824863   62447 host.go:66] Checking if "kubernetes-upgrade-370171" exists ...
	I1011 22:11:51.825166   62447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:11:51.825191   62447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:11:51.825241   62447 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-370171"
	I1011 22:11:51.825265   62447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-370171"
	I1011 22:11:51.825459   62447 out.go:177] * Verifying Kubernetes components...
	I1011 22:11:51.825676   62447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:11:51.825720   62447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:11:51.826702   62447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:11:51.845618   62447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I1011 22:11:51.846087   62447 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:11:51.846746   62447 main.go:141] libmachine: Using API Version  1
	I1011 22:11:51.846774   62447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:11:51.847329   62447 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:11:51.847334   62447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I1011 22:11:51.847557   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetState
	I1011 22:11:51.847852   62447 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:11:51.848312   62447 main.go:141] libmachine: Using API Version  1
	I1011 22:11:51.848336   62447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:11:51.848736   62447 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:11:51.849447   62447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:11:51.849472   62447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:11:51.856690   62447 kapi.go:59] client config for kubernetes-upgrade-370171: &rest.Config{Host:"https://192.168.61.235:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.crt", KeyFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kubernetes-upgrade-370171/client.key", CAFile:"/home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 22:11:51.856901   62447 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-370171"
	W1011 22:11:51.856910   62447 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:11:51.856929   62447 host.go:66] Checking if "kubernetes-upgrade-370171" exists ...
	I1011 22:11:51.857209   62447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:11:51.857244   62447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:11:51.868767   62447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I1011 22:11:51.869932   62447 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:11:51.871127   62447 main.go:141] libmachine: Using API Version  1
	I1011 22:11:51.871154   62447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:11:51.871510   62447 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:11:51.871672   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetState
	I1011 22:11:51.875643   62447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I1011 22:11:51.876103   62447 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:11:51.876631   62447 main.go:141] libmachine: Using API Version  1
	I1011 22:11:51.876650   62447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:11:51.877019   62447 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:11:51.877596   62447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:11:51.877642   62447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:11:51.884582   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:11:51.886961   62447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:11:51.888341   62447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:11:51.888361   62447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:11:51.888382   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:11:51.893239   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:11:51.895758   62447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1011 22:11:51.896251   62447 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:11:51.896716   62447 main.go:141] libmachine: Using API Version  1
	I1011 22:11:51.896733   62447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:11:51.897329   62447 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:11:51.897480   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetState
	I1011 22:11:51.899303   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .DriverName
	I1011 22:11:51.899557   62447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:11:51.899571   62447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:11:51.899584   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHHostname
	I1011 22:11:51.902645   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:11:51.915288   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:09:53 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:11:51.915318   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:11:51.915360   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:4d:be", ip: ""} in network mk-kubernetes-upgrade-370171: {Iface:virbr3 ExpiryTime:2024-10-11 23:09:53 +0000 UTC Type:0 Mac:52:54:00:da:4d:be Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:kubernetes-upgrade-370171 Clientid:01:52:54:00:da:4d:be}
	I1011 22:11:51.915379   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | domain kubernetes-upgrade-370171 has defined IP address 192.168.61.235 and MAC address 52:54:00:da:4d:be in network mk-kubernetes-upgrade-370171
	I1011 22:11:51.915642   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:11:51.915733   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHPort
	I1011 22:11:51.915806   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:11:51.915928   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHKeyPath
	I1011 22:11:51.915952   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:11:51.916129   62447 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:11:51.916792   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .GetSSHUsername
	I1011 22:11:51.916968   62447 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/kubernetes-upgrade-370171/id_rsa Username:docker}
	I1011 22:11:52.042752   62447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:11:52.072252   62447 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:11:52.072348   62447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:11:52.089918   62447 api_server.go:72] duration metric: took 265.793314ms to wait for apiserver process to appear ...
	I1011 22:11:52.089946   62447 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:11:52.089967   62447 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1011 22:11:52.097958   62447 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1011 22:11:52.103554   62447 api_server.go:141] control plane version: v1.31.1
	I1011 22:11:52.103586   62447 api_server.go:131] duration metric: took 13.631835ms to wait for apiserver health ...
	I1011 22:11:52.103596   62447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:11:52.110030   62447 system_pods.go:59] 6 kube-system pods found
	I1011 22:11:52.110055   62447 system_pods.go:61] "coredns-7c65d6cfc9-8cnv7" [265fcadf-63d8-412d-95bd-88ea65a8d0b8] Running
	I1011 22:11:52.110063   62447 system_pods.go:61] "coredns-7c65d6cfc9-cmnln" [ebb55b19-2da3-4363-956c-69d5904cf8ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:11:52.110071   62447 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-370171" [eb39dd66-6588-483c-a6bc-bb01a2d51ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:11:52.110079   62447 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-370171" [033bf9db-2486-4a0e-95b0-2c94659c371f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:11:52.110083   62447 system_pods.go:61] "kube-proxy-nnghj" [fdd3cac6-023d-493f-b2d0-eb014ab9df37] Running
	I1011 22:11:52.110087   62447 system_pods.go:61] "storage-provisioner" [c9808131-f2d9-4e98-bda1-2ed739a56d85] Running
	I1011 22:11:52.110092   62447 system_pods.go:74] duration metric: took 6.490824ms to wait for pod list to return data ...
	I1011 22:11:52.110101   62447 kubeadm.go:582] duration metric: took 285.981967ms to wait for: map[apiserver:true system_pods:true]
	I1011 22:11:52.110111   62447 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:11:52.116528   62447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:11:52.116551   62447 node_conditions.go:123] node cpu capacity is 2
	I1011 22:11:52.116562   62447 node_conditions.go:105] duration metric: took 6.441922ms to run NodePressure ...
	I1011 22:11:52.116574   62447 start.go:241] waiting for startup goroutines ...
	I1011 22:11:52.176080   62447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:11:52.188120   62447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:11:52.449048   62447 main.go:141] libmachine: Making call to close driver server
	I1011 22:11:52.449076   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Close
	I1011 22:11:52.449378   62447 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:11:52.449396   62447 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:11:52.449406   62447 main.go:141] libmachine: Making call to close driver server
	I1011 22:11:52.449415   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Close
	I1011 22:11:52.449769   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Closing plugin on server side
	I1011 22:11:52.449791   62447 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:11:52.449802   62447 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:11:52.457718   62447 main.go:141] libmachine: Making call to close driver server
	I1011 22:11:52.457735   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Close
	I1011 22:11:52.457999   62447 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:11:52.458013   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Closing plugin on server side
	I1011 22:11:52.458018   62447 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:11:52.906002   62447 main.go:141] libmachine: Making call to close driver server
	I1011 22:11:52.906047   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Close
	I1011 22:11:52.906420   62447 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:11:52.906434   62447 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:11:52.906444   62447 main.go:141] libmachine: Making call to close driver server
	I1011 22:11:52.906451   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) Calling .Close
	I1011 22:11:52.906448   62447 main.go:141] libmachine: (kubernetes-upgrade-370171) DBG | Closing plugin on server side
	I1011 22:11:52.906677   62447 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:11:52.906691   62447 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:11:52.909702   62447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1011 22:11:52.911020   62447 addons.go:510] duration metric: took 1.086300937s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1011 22:11:52.911060   62447 start.go:246] waiting for cluster config update ...
	I1011 22:11:52.911084   62447 start.go:255] writing updated cluster config ...
	I1011 22:11:52.911355   62447 ssh_runner.go:195] Run: rm -f paused
	I1011 22:11:52.965585   62447 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:11:52.967131   62447 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-370171" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.811133340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684713811109732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44d99464-45e8-4e0d-bfba-da3380392e37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.812142703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c3f33c4-3e9d-4c13-a386-cce6a3f113fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.812212911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c3f33c4-3e9d-4c13-a386-cce6a3f113fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.812789786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a1ff977e0c1ea522790c0d785fa92c13b2d80a3785373c51e9aeeb5c70a41f5,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684710727797278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a007b129e073d7218ebb408eef26f5655fa9fef31d1c55d8a016872b12da6076,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684710707078233,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dbcfe0d403bb4f63b69814979c21ffa98601945cbe5fae1d9dba020ad288c,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728684706852882444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d5724abacf3ffc8ec79479391e80bf209ff0ca2cb7dac87181af4261ed48c7,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728684706864243431,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86bd24f84c99d5288d6fe9985e5fcbf560805cbddc77241a88439cf0526f9d52,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728684706829893823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd27f2fa58b7ddde60712d0a5d2b948fda070ddcba5cb53f6ecd7b833bdbed2,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728684703564938738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0400242c4b17068a54f29fd2564e616dc85bcf28e2c8be9e24e8d59c81c499b,PodSandboxId:2d8420ae805f8fd6e3f639fe62c65646334bf38b783b302474b48d650fcf36db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728684698058291759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f
-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a3fabafc883426d9e39225194479824fde023a4f30fac10c2bfcb7d390e675,PodSandboxId:b88cdc007012c53d41e90300afba081ae11ee9a031197d43f955444736c42b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684693176838432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728684692158030608,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684692763264631,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728684691963327911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25de8df6051478b4a4955f2b60b7e7a7aa11bc1f03b4c70fa230d2ac8e479896,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728684692020335201,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728684691883149325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728684691842979586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e71211e0831360f1c48c71704bda7651789350c372835fc726cf552a325e6e,PodSandboxId:d7d8da033db7ab23ef5222d6c9325dd198343a2e294e55e980ea03fd2eec2517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684656842721476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5e9a4e1b08cfe1056c0d0c633743b508eb36f1ec082704dc2ab821725c3ba0,PodSandboxId:21e5bd29ad27dacf5932b739656ec931a2170296e3b9fe83dc561036a544c8d5,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728684656217910078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c3f33c4-3e9d-4c13-a386-cce6a3f113fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.869237748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=678c2f4b-bf9b-4ecd-a178-c18079094027 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.869334949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=678c2f4b-bf9b-4ecd-a178-c18079094027 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.870472533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63bfd668-e9bc-4ae0-b7c0-4aa8335d88e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.870881626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684713870851477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63bfd668-e9bc-4ae0-b7c0-4aa8335d88e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.871533894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a44e212-f670-4f17-ad5c-1327d8d4f9ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.871608390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a44e212-f670-4f17-ad5c-1327d8d4f9ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.871989077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a1ff977e0c1ea522790c0d785fa92c13b2d80a3785373c51e9aeeb5c70a41f5,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684710727797278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a007b129e073d7218ebb408eef26f5655fa9fef31d1c55d8a016872b12da6076,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684710707078233,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dbcfe0d403bb4f63b69814979c21ffa98601945cbe5fae1d9dba020ad288c,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728684706852882444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d5724abacf3ffc8ec79479391e80bf209ff0ca2cb7dac87181af4261ed48c7,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728684706864243431,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86bd24f84c99d5288d6fe9985e5fcbf560805cbddc77241a88439cf0526f9d52,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728684706829893823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd27f2fa58b7ddde60712d0a5d2b948fda070ddcba5cb53f6ecd7b833bdbed2,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728684703564938738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0400242c4b17068a54f29fd2564e616dc85bcf28e2c8be9e24e8d59c81c499b,PodSandboxId:2d8420ae805f8fd6e3f639fe62c65646334bf38b783b302474b48d650fcf36db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728684698058291759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f
-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a3fabafc883426d9e39225194479824fde023a4f30fac10c2bfcb7d390e675,PodSandboxId:b88cdc007012c53d41e90300afba081ae11ee9a031197d43f955444736c42b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684693176838432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728684692158030608,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684692763264631,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728684691963327911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25de8df6051478b4a4955f2b60b7e7a7aa11bc1f03b4c70fa230d2ac8e479896,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728684692020335201,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728684691883149325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728684691842979586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e71211e0831360f1c48c71704bda7651789350c372835fc726cf552a325e6e,PodSandboxId:d7d8da033db7ab23ef5222d6c9325dd198343a2e294e55e980ea03fd2eec2517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684656842721476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5e9a4e1b08cfe1056c0d0c633743b508eb36f1ec082704dc2ab821725c3ba0,PodSandboxId:21e5bd29ad27dacf5932b739656ec931a2170296e3b9fe83dc561036a544c8d5,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728684656217910078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a44e212-f670-4f17-ad5c-1327d8d4f9ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.922945790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74803e11-c51c-4a06-b218-253686c7d613 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.923042210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74803e11-c51c-4a06-b218-253686c7d613 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.924211036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c19812de-ae85-437b-bbb5-f079c509a064 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.924714499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684713924685676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c19812de-ae85-437b-bbb5-f079c509a064 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.925339661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ec3fe99-8974-415b-8c14-4d57cc16b276 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.925458981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ec3fe99-8974-415b-8c14-4d57cc16b276 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.925864263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a1ff977e0c1ea522790c0d785fa92c13b2d80a3785373c51e9aeeb5c70a41f5,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684710727797278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a007b129e073d7218ebb408eef26f5655fa9fef31d1c55d8a016872b12da6076,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684710707078233,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dbcfe0d403bb4f63b69814979c21ffa98601945cbe5fae1d9dba020ad288c,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728684706852882444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d5724abacf3ffc8ec79479391e80bf209ff0ca2cb7dac87181af4261ed48c7,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728684706864243431,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86bd24f84c99d5288d6fe9985e5fcbf560805cbddc77241a88439cf0526f9d52,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728684706829893823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd27f2fa58b7ddde60712d0a5d2b948fda070ddcba5cb53f6ecd7b833bdbed2,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728684703564938738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0400242c4b17068a54f29fd2564e616dc85bcf28e2c8be9e24e8d59c81c499b,PodSandboxId:2d8420ae805f8fd6e3f639fe62c65646334bf38b783b302474b48d650fcf36db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728684698058291759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f
-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a3fabafc883426d9e39225194479824fde023a4f30fac10c2bfcb7d390e675,PodSandboxId:b88cdc007012c53d41e90300afba081ae11ee9a031197d43f955444736c42b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684693176838432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728684692158030608,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684692763264631,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728684691963327911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25de8df6051478b4a4955f2b60b7e7a7aa11bc1f03b4c70fa230d2ac8e479896,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728684692020335201,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728684691883149325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728684691842979586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e71211e0831360f1c48c71704bda7651789350c372835fc726cf552a325e6e,PodSandboxId:d7d8da033db7ab23ef5222d6c9325dd198343a2e294e55e980ea03fd2eec2517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684656842721476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5e9a4e1b08cfe1056c0d0c633743b508eb36f1ec082704dc2ab821725c3ba0,PodSandboxId:21e5bd29ad27dacf5932b739656ec931a2170296e3b9fe83dc561036a544c8d5,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728684656217910078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ec3fe99-8974-415b-8c14-4d57cc16b276 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.969758471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=809809f7-84df-4ef6-8916-0614f7b9aa98 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.970049327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=809809f7-84df-4ef6-8916-0614f7b9aa98 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.971871705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f648118-a7ce-41b4-ab35-a67458581fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.972223441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728684713972203893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f648118-a7ce-41b4-ab35-a67458581fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.972931865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96ec1f3a-74d3-4002-8ad9-99b7e4181a08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.973001296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96ec1f3a-74d3-4002-8ad9-99b7e4181a08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:11:53 kubernetes-upgrade-370171 crio[2687]: time="2024-10-11 22:11:53.973625028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a1ff977e0c1ea522790c0d785fa92c13b2d80a3785373c51e9aeeb5c70a41f5,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728684710727797278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a007b129e073d7218ebb408eef26f5655fa9fef31d1c55d8a016872b12da6076,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684710707078233,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dbcfe0d403bb4f63b69814979c21ffa98601945cbe5fae1d9dba020ad288c,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728684706852882444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83d5724abacf3ffc8ec79479391e80bf209ff0ca2cb7dac87181af4261ed48c7,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728684706864243431,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86bd24f84c99d5288d6fe9985e5fcbf560805cbddc77241a88439cf0526f9d52,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728684706829893823,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd27f2fa58b7ddde60712d0a5d2b948fda070ddcba5cb53f6ecd7b833bdbed2,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728684703564938738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0400242c4b17068a54f29fd2564e616dc85bcf28e2c8be9e24e8d59c81c499b,PodSandboxId:2d8420ae805f8fd6e3f639fe62c65646334bf38b783b302474b48d650fcf36db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728684698058291759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f
-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a3fabafc883426d9e39225194479824fde023a4f30fac10c2bfcb7d390e675,PodSandboxId:b88cdc007012c53d41e90300afba081ae11ee9a031197d43f955444736c42b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728684693176838432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6,PodSandboxId:4bc270f73d42919e7a74a9b13968f6bbc17cdf5407fa960c1f128978ef1e6f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728684692158030608,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9808131-f2d9-4e98-bda1-2ed739a56d85,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8,PodSandboxId:ee7156f06777b6920907a93a246f727a2e58c761e9ef1f8685019b272e68fa1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684692763264631,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cmnln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebb55b19-2da3-4363-956c-69d5904cf8ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d,PodSandboxId:cf34e98c74211ee182f9af66bdeb6acf2d1736937d016ef42ae63bef4c7e7396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728684691963327911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ae942bc23ded4009ea7af60f4f2169,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25de8df6051478b4a4955f2b60b7e7a7aa11bc1f03b4c70fa230d2ac8e479896,PodSandboxId:1631fbbeb197565bf2febdcd37a651f64f9a80aeed8a72ca8c250b8d1a872d86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728684692020335201,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98620440b2ecba8eb4ae614ca36c3683,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436,PodSandboxId:74c21a8e2f3910efd96251cd1e752d81d529b890e63b4906e5a6703b335d0e03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728684691883149325,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a51880fd3428e6e62a844cf31a3db20,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012,PodSandboxId:a45e7129453dce7b0ccb163552b72962f4a4587f9a0ca068956ee7c126065f3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728684691842979586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-370171,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649dcb7203b5dd41d5f1c4bdfc1e7483,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e71211e0831360f1c48c71704bda7651789350c372835fc726cf552a325e6e,PodSandboxId:d7d8da033db7ab23ef5222d6c9325dd198343a2e294e55e980ea03fd2eec2517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728684656842721476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8cnv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fcadf-63d8-412d-95bd-88ea65a8d0b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5e9a4e1b08cfe1056c0d0c633743b508eb36f1ec082704dc2ab821725c3ba0,PodSandboxId:21e5bd29ad27dacf5932b739656ec931a2170296e3b9fe83dc561036a544c8d5,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728684656217910078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nnghj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdd3cac6-023d-493f-b2d0-eb014ab9df37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96ec1f3a-74d3-4002-8ad9-99b7e4181a08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a1ff977e0c1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   4bc270f73d429       storage-provisioner
	a007b129e073d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   ee7156f06777b       coredns-7c65d6cfc9-cmnln
	83d5724abacf3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   3                   cf34e98c74211       kube-controller-manager-kubernetes-upgrade-370171
	180dbcfe0d403       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   a45e7129453dc       kube-scheduler-kubernetes-upgrade-370171
	86bd24f84c99d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            3                   74c21a8e2f391       kube-apiserver-kubernetes-upgrade-370171
	7bd27f2fa58b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 seconds ago      Running             etcd                      2                   1631fbbeb1975       etcd-kubernetes-upgrade-370171
	f0400242c4b17       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 seconds ago      Running             kube-proxy                1                   2d8420ae805f8       kube-proxy-nnghj
	d7a3fabafc883       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   20 seconds ago      Running             coredns                   1                   b88cdc007012c       coredns-7c65d6cfc9-8cnv7
	7b29252d98992       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Exited              coredns                   1                   ee7156f06777b       coredns-7c65d6cfc9-cmnln
	cb9bc9942bca3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Exited              storage-provisioner       1                   4bc270f73d429       storage-provisioner
	25de8df605147       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Exited              etcd                      1                   1631fbbeb1975       etcd-kubernetes-upgrade-370171
	3de10295f662b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago      Exited              kube-controller-manager   2                   cf34e98c74211       kube-controller-manager-kubernetes-upgrade-370171
	9a3e44dfb04b3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Exited              kube-apiserver            2                   74c21a8e2f391       kube-apiserver-kubernetes-upgrade-370171
	4edb2fc0f00c5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago      Exited              kube-scheduler            1                   a45e7129453dc       kube-scheduler-kubernetes-upgrade-370171
	39e71211e0831       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   57 seconds ago      Exited              coredns                   0                   d7d8da033db7a       coredns-7c65d6cfc9-8cnv7
	dc5e9a4e1b08c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   57 seconds ago      Exited              kube-proxy                0                   21e5bd29ad27d       kube-proxy-nnghj
	
	
	==> coredns [39e71211e0831360f1c48c71704bda7651789350c372835fc726cf552a325e6e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8] <==
	
	
	==> coredns [a007b129e073d7218ebb408eef26f5655fa9fef31d1c55d8a016872b12da6076] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d7a3fabafc883426d9e39225194479824fde023a4f30fac10c2bfcb7d390e675] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2137015064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (11-Oct-2024 22:11:33.537) (total time: 10001ms):
	Trace[2137015064]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:11:43.539)
	Trace[2137015064]: [10.001811648s] [10.001811648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[740515804]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (11-Oct-2024 22:11:33.538) (total time: 10001ms):
	Trace[740515804]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:11:43.539)
	Trace[740515804]: [10.001829667s] [10.001829667s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[467138796]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (11-Oct-2024 22:11:33.535) (total time: 10004ms):
	Trace[467138796]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (22:11:43.539)
	Trace[467138796]: [10.004174061s] [10.004174061s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-370171
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-370171
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:10:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-370171
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:11:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:11:50 +0000   Fri, 11 Oct 2024 22:10:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:11:50 +0000   Fri, 11 Oct 2024 22:10:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:11:50 +0000   Fri, 11 Oct 2024 22:10:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:11:50 +0000   Fri, 11 Oct 2024 22:10:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.235
	  Hostname:    kubernetes-upgrade-370171
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b34d98dd74e411486d5f236a306edd5
	  System UUID:                6b34d98d-d74e-4114-86d5-f236a306edd5
	  Boot ID:                    ea947e5b-ec43-4d7a-b09c-89a52468d522
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8cnv7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     60s
	  kube-system                 coredns-7c65d6cfc9-cmnln                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     60s
	  kube-system                 kube-apiserver-kubernetes-upgrade-370171             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-370171    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-nnghj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             140Mi (6%)  340Mi (16%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 4s                   kube-proxy       
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    104s (x8 over 105s)  kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 105s)  kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  104s (x8 over 105s)  kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           71s                  node-controller  Node kubernetes-upgrade-370171 event: Registered Node kubernetes-upgrade-370171 in Controller
	  Normal  Starting                 8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)      kubelet          Node kubernetes-upgrade-370171 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node kubernetes-upgrade-370171 event: Registered Node kubernetes-upgrade-370171 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct11 22:10] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.064980] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059379] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.177913] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.144163] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.316231] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +4.382279] systemd-fstab-generator[726]: Ignoring "noauto" option for root device
	[  +0.063224] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.269920] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[ +14.450089] kauditd_printk_skb: 87 callbacks suppressed
	[ +30.882264] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.117852] kauditd_printk_skb: 10 callbacks suppressed
	[Oct11 22:11] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.110872] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.062421] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.202231] systemd-fstab-generator[2328]: Ignoring "noauto" option for root device
	[  +0.163120] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.613591] systemd-fstab-generator[2517]: Ignoring "noauto" option for root device
	[  +5.339562] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.097258] kauditd_printk_skb: 153 callbacks suppressed
	[  +6.911409] kauditd_printk_skb: 99 callbacks suppressed
	[  +8.130908] systemd-fstab-generator[3805]: Ignoring "noauto" option for root device
	[  +4.229520] kauditd_printk_skb: 45 callbacks suppressed
	[  +1.523218] systemd-fstab-generator[4216]: Ignoring "noauto" option for root device
	
	
	==> etcd [25de8df6051478b4a4955f2b60b7e7a7aa11bc1f03b4c70fa230d2ac8e479896] <==
	{"level":"info","ts":"2024-10-11T22:11:32.662963Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-11T22:11:32.736181Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","commit-index":404}
	{"level":"info","ts":"2024-10-11T22:11:32.736374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-11T22:11:32.736476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became follower at term 2"}
	{"level":"info","ts":"2024-10-11T22:11:32.736538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5c9ce5d2cd86398f [peers: [], term: 2, commit: 404, applied: 0, lastindex: 404, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-11T22:11:32.767782Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-11T22:11:32.827332Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":390}
	{"level":"info","ts":"2024-10-11T22:11:32.836660Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-11T22:11:32.852967Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5c9ce5d2cd86398f","timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:11:32.853254Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5c9ce5d2cd86398f"}
	{"level":"info","ts":"2024-10-11T22:11:32.853290Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"5c9ce5d2cd86398f","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-11T22:11:32.856614Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-11T22:11:32.856670Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-11T22:11:32.856688Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-11T22:11:32.859889Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-11T22:11:32.860221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=(6673461441410251151)"}
	{"level":"info","ts":"2024-10-11T22:11:32.860285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","added-peer-id":"5c9ce5d2cd86398f","added-peer-peer-urls":["https://192.168.61.235:2380"]}
	{"level":"info","ts":"2024-10-11T22:11:32.860376Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:11:32.860448Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:11:32.871967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:11:32.892017Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:11:32.892235Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:11:32.892256Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:11:32.892322Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-10-11T22:11:32.892327Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.235:2380"}
	
	
	==> etcd [7bd27f2fa58b7ddde60712d0a5d2b948fda070ddcba5cb53f6ecd7b833bdbed2] <==
	{"level":"info","ts":"2024-10-11T22:11:43.697012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=(6673461441410251151)"}
	{"level":"info","ts":"2024-10-11T22:11:43.697091Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","added-peer-id":"5c9ce5d2cd86398f","added-peer-peer-urls":["https://192.168.61.235:2380"]}
	{"level":"info","ts":"2024-10-11T22:11:43.697228Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:11:43.697275Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:11:43.699790Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:11:43.700042Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:11:43.700080Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:11:43.700263Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-10-11T22:11:43.700288Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-10-11T22:11:44.683357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-11T22:11:44.683446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:11:44.683479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgPreVoteResp from 5c9ce5d2cd86398f at term 2"}
	{"level":"info","ts":"2024-10-11T22:11:44.683493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became candidate at term 3"}
	{"level":"info","ts":"2024-10-11T22:11:44.683499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgVoteResp from 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-10-11T22:11:44.683508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became leader at term 3"}
	{"level":"info","ts":"2024-10-11T22:11:44.683515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5c9ce5d2cd86398f elected leader 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-10-11T22:11:44.685067Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5c9ce5d2cd86398f","local-member-attributes":"{Name:kubernetes-upgrade-370171 ClientURLs:[https://192.168.61.235:2379]}","request-path":"/0/members/5c9ce5d2cd86398f/attributes","cluster-id":"d507c5522fd9f0c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:11:44.685156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:11:44.685282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:11:44.686325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:11:44.686568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:11:44.687207Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:11:44.687267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:11:44.687276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:11:44.687814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.235:2379"}
	
	
	==> kernel <==
	 22:11:54 up 2 min,  0 users,  load average: 1.38, 0.46, 0.17
	Linux kubernetes-upgrade-370171 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [86bd24f84c99d5288d6fe9985e5fcbf560805cbddc77241a88439cf0526f9d52] <==
	I1011 22:11:50.090839       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1011 22:11:50.096778       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1011 22:11:50.099089       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1011 22:11:50.099173       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1011 22:11:50.099350       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1011 22:11:50.099544       1 aggregator.go:171] initial CRD sync complete...
	I1011 22:11:50.099583       1 autoregister_controller.go:144] Starting autoregister controller
	I1011 22:11:50.099605       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1011 22:11:50.099627       1 cache.go:39] Caches are synced for autoregister controller
	I1011 22:11:50.125121       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1011 22:11:50.125339       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1011 22:11:50.128854       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1011 22:11:50.133687       1 shared_informer.go:320] Caches are synced for configmaps
	I1011 22:11:50.144769       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1011 22:11:50.157077       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1011 22:11:50.157214       1 policy_source.go:224] refreshing policies
	I1011 22:11:50.196228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1011 22:11:50.892232       1 controller.go:615] quota admission added evaluator for: endpoints
	I1011 22:11:51.028450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1011 22:11:51.609348       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1011 22:11:51.628044       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1011 22:11:51.675323       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1011 22:11:51.776940       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 22:11:51.784897       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 22:11:53.790058       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436] <==
	I1011 22:11:32.837645       1 options.go:228] external host was not specified, using 192.168.61.235
	I1011 22:11:32.842812       1 server.go:142] Version: v1.31.1
	I1011 22:11:32.842897       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:11:34.066299       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W1011 22:11:34.068349       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:34.068557       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1011 22:11:34.074182       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1011 22:11:34.077776       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1011 22:11:34.077813       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1011 22:11:34.078003       1 instance.go:232] Using reconciler: lease
	W1011 22:11:34.080524       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:35.069383       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:35.069564       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:35.081916       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:36.451898       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:36.877639       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:36.960295       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:39.283046       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:39.519073       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:39.699646       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:11:42.828636       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d] <==
	
	
	==> kube-controller-manager [83d5724abacf3ffc8ec79479391e80bf209ff0ca2cb7dac87181af4261ed48c7] <==
	I1011 22:11:53.510732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.156µs"
	I1011 22:11:53.515191       1 shared_informer.go:320] Caches are synced for resource quota
	I1011 22:11:53.549654       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"kubernetes-upgrade-370171\" does not exist"
	I1011 22:11:53.576124       1 shared_informer.go:320] Caches are synced for TTL
	I1011 22:11:53.582591       1 shared_informer.go:320] Caches are synced for taint
	I1011 22:11:53.583011       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1011 22:11:53.583607       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-370171"
	I1011 22:11:53.583726       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1011 22:11:53.588283       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1011 22:11:53.606034       1 shared_informer.go:320] Caches are synced for cronjob
	I1011 22:11:53.616205       1 shared_informer.go:320] Caches are synced for attach detach
	I1011 22:11:53.638474       1 shared_informer.go:320] Caches are synced for node
	I1011 22:11:53.638531       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1011 22:11:53.638558       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1011 22:11:53.638562       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1011 22:11:53.638567       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1011 22:11:53.638565       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1011 22:11:53.638628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-370171"
	I1011 22:11:53.638660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-370171"
	I1011 22:11:53.638695       1 shared_informer.go:320] Caches are synced for daemon sets
	I1011 22:11:53.639712       1 shared_informer.go:320] Caches are synced for persistent volume
	I1011 22:11:53.648169       1 shared_informer.go:320] Caches are synced for GC
	I1011 22:11:54.039636       1 shared_informer.go:320] Caches are synced for garbage collector
	I1011 22:11:54.086537       1 shared_informer.go:320] Caches are synced for garbage collector
	I1011 22:11:54.086563       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [dc5e9a4e1b08cfe1056c0d0c633743b508eb36f1ec082704dc2ab821725c3ba0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:10:56.446349       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:10:56.460913       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	E1011 22:10:56.460990       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:10:56.502758       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:10:56.502841       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:10:56.502867       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:10:56.505276       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:10:56.505685       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:10:56.505710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:10:56.508321       1 config.go:199] "Starting service config controller"
	I1011 22:10:56.508368       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:10:56.508456       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:10:56.508484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:10:56.508977       1 config.go:328] "Starting node config controller"
	I1011 22:10:56.509003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:10:56.609219       1 shared_informer.go:320] Caches are synced for node config
	I1011 22:10:56.609262       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:10:56.609283       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f0400242c4b17068a54f29fd2564e616dc85bcf28e2c8be9e24e8d59c81c499b] <==
	 >
	E1011 22:11:38.222833       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:11:44.549559       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-370171\": dial tcp 192.168.61.235:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.235:41122->192.168.61.235:8443: read: connection reset by peer"
	E1011 22:11:45.722895       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-370171\": dial tcp 192.168.61.235:8443: connect: connection refused"
	I1011 22:11:50.110881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	E1011 22:11:50.110967       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:11:50.177775       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:11:50.177815       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:11:50.177837       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:11:50.180753       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:11:50.181237       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:11:50.181712       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:11:50.183734       1 config.go:199] "Starting service config controller"
	I1011 22:11:50.183910       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:11:50.183989       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:11:50.184070       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:11:50.186857       1 config.go:328] "Starting node config controller"
	I1011 22:11:50.186888       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:11:50.284934       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 22:11:50.285050       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:11:50.287086       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [180dbcfe0d403bb4f63b69814979c21ffa98601945cbe5fae1d9dba020ad288c] <==
	I1011 22:11:48.183278       1 serving.go:386] Generated self-signed cert in-memory
	W1011 22:11:50.017102       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 22:11:50.017203       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 22:11:50.017215       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 22:11:50.017221       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 22:11:50.112030       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1011 22:11:50.112085       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:11:50.116923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 22:11:50.116969       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 22:11:50.120277       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1011 22:11:50.120470       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1011 22:11:50.217597       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012] <==
	I1011 22:11:33.827004       1 serving.go:386] Generated self-signed cert in-memory
	W1011 22:11:44.546679       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.61.235:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.61.235:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.235:59862->192.168.61.235:8443: read: connection reset by peer
	W1011 22:11:44.546727       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 22:11:44.546740       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 22:11:44.564494       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1011 22:11:44.564564       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1011 22:11:44.564591       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1011 22:11:44.567647       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 22:11:44.567693       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1011 22:11:44.567732       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I1011 22:11:44.567806       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1011 22:11:44.567901       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1011 22:11:44.568118       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:46.629262    3812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/98620440b2ecba8eb4ae614ca36c3683-etcd-data\") pod \"etcd-kubernetes-upgrade-370171\" (UID: \"98620440b2ecba8eb4ae614ca36c3683\") " pod="kube-system/etcd-kubernetes-upgrade-370171"
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:46.806744    3812 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-370171"
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: E1011 22:11:46.807688    3812 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="kubernetes-upgrade-370171"
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:46.820126    3812 scope.go:117] "RemoveContainer" containerID="9a3e44dfb04b3b7499c5dd79fc79cacae6744345dffc5ed02314cc1abc9a0436"
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:46.828886    3812 scope.go:117] "RemoveContainer" containerID="4edb2fc0f00c5a67cfc6adb9f9d19866363639acbb6365090128f66b50a87012"
	Oct 11 22:11:46 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:46.834618    3812 scope.go:117] "RemoveContainer" containerID="3de10295f662b71f7a2a49be2671218b35d5dd6501bf4b43169d3d69ac99ef4d"
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: E1011 22:11:47.010166    3812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-370171?timeout=10s\": dial tcp 192.168.61.235:8443: connect: connection refused" interval="800ms"
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:47.209980    3812 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-370171"
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: E1011 22:11:47.211501    3812 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="kubernetes-upgrade-370171"
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: W1011 22:11:47.258891    3812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: E1011 22:11:47.258963    3812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.235:8443: connect: connection refused" logger="UnhandledError"
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: W1011 22:11:47.268778    3812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Oct 11 22:11:47 kubernetes-upgrade-370171 kubelet[3812]: E1011 22:11:47.268834    3812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.235:8443: connect: connection refused" logger="UnhandledError"
	Oct 11 22:11:48 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:48.018346    3812 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-370171"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.228979    3812 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-370171"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.229363    3812 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-370171"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.229511    3812 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.234675    3812 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.380922    3812 apiserver.go:52] "Watching apiserver"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.405173    3812 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.452965    3812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdd3cac6-023d-493f-b2d0-eb014ab9df37-xtables-lock\") pod \"kube-proxy-nnghj\" (UID: \"fdd3cac6-023d-493f-b2d0-eb014ab9df37\") " pod="kube-system/kube-proxy-nnghj"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.453161    3812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdd3cac6-023d-493f-b2d0-eb014ab9df37-lib-modules\") pod \"kube-proxy-nnghj\" (UID: \"fdd3cac6-023d-493f-b2d0-eb014ab9df37\") " pod="kube-system/kube-proxy-nnghj"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.453240    3812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9808131-f2d9-4e98-bda1-2ed739a56d85-tmp\") pod \"storage-provisioner\" (UID: \"c9808131-f2d9-4e98-bda1-2ed739a56d85\") " pod="kube-system/storage-provisioner"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.687682    3812 scope.go:117] "RemoveContainer" containerID="cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6"
	Oct 11 22:11:50 kubernetes-upgrade-370171 kubelet[3812]: I1011 22:11:50.688044    3812 scope.go:117] "RemoveContainer" containerID="7b29252d989927b7a805f289c51ddca274c41dec404edee1c64455646a4cfec8"
	
	
	==> storage-provisioner [1a1ff977e0c1ea522790c0d785fa92c13b2d80a3785373c51e9aeeb5c70a41f5] <==
	I1011 22:11:50.868497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:11:50.883750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:11:50.885083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:11:50.898523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:11:50.898688       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-370171_98ba57e0-8ff6-4dac-a0e7-4eabe16748b5!
	I1011 22:11:50.899628       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7789593b-0608-44cc-9cb1-68adba33570d", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-370171_98ba57e0-8ff6-4dac-a0e7-4eabe16748b5 became leader
	I1011 22:11:50.999323       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-370171_98ba57e0-8ff6-4dac-a0e7-4eabe16748b5!
	
	
	==> storage-provisioner [cb9bc9942bca39b5fe41d4644d0587b8397732b6332a3b53c62987ace39dbfe6] <==
	I1011 22:11:33.441133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1011 22:11:43.492486       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-370171 -n kubernetes-upgrade-370171
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-370171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-kubernetes-upgrade-370171
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-370171 describe pod etcd-kubernetes-upgrade-370171
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-370171 describe pod etcd-kubernetes-upgrade-370171: exit status 1 (93.81807ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-kubernetes-upgrade-370171" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-370171 describe pod etcd-kubernetes-upgrade-370171: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-370171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-370171
I1011 22:11:55.704473   18814 config.go:182] Loaded profile config "auto-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-370171: (1.131014263s)
--- FAIL: TestKubernetesUpgrade (523.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (287.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m47.51637543s)

                                                
                                                
-- stdout --
	* [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:14:00.788394   70125 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:14:00.788537   70125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:14:00.788549   70125 out.go:358] Setting ErrFile to fd 2...
	I1011 22:14:00.788554   70125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:14:00.788824   70125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:14:00.789496   70125 out.go:352] Setting JSON to false
	I1011 22:14:00.790582   70125 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6986,"bootTime":1728677855,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:14:00.790711   70125 start.go:139] virtualization: kvm guest
	I1011 22:14:00.793022   70125 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:14:00.794548   70125 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:14:00.794673   70125 notify.go:220] Checking for updates...
	I1011 22:14:00.797288   70125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:14:00.798873   70125 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:14:00.800255   70125 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:14:00.801622   70125 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:14:00.802902   70125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:14:00.804720   70125 config.go:182] Loaded profile config "bridge-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:14:00.804838   70125 config.go:182] Loaded profile config "enable-default-cni-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:14:00.804934   70125 config.go:182] Loaded profile config "flannel-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:14:00.805035   70125 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:14:00.851185   70125 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:14:00.852625   70125 start.go:297] selected driver: kvm2
	I1011 22:14:00.852646   70125 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:14:00.852663   70125 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:14:00.853818   70125 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:14:00.853947   70125 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:14:00.869697   70125 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:14:00.869753   70125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 22:14:00.870026   70125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:14:00.870063   70125 cni.go:84] Creating CNI manager for ""
	I1011 22:14:00.870104   70125 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:14:00.870116   70125 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 22:14:00.870166   70125 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:14:00.870273   70125 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:14:00.873171   70125 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:14:00.874497   70125 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:14:00.874545   70125 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:14:00.874558   70125 cache.go:56] Caching tarball of preloaded images
	I1011 22:14:00.874696   70125 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:14:00.874712   70125 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:14:00.874834   70125 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:14:00.874862   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json: {Name:mk907dd94f81c1efde64c1dd50285cb178bff70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:00.875029   70125 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:14:12.139705   70125 start.go:364] duration metric: took 11.264635592s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:14:12.139768   70125 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:14:12.139951   70125 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 22:14:12.325178   70125 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 22:14:12.325454   70125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:14:12.325511   70125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:14:12.340647   70125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I1011 22:14:12.341219   70125 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:14:12.341781   70125 main.go:141] libmachine: Using API Version  1
	I1011 22:14:12.341803   70125 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:14:12.342113   70125 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:14:12.342301   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:14:12.342438   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:12.342636   70125 start.go:159] libmachine.API.Create for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:14:12.342676   70125 client.go:168] LocalClient.Create starting
	I1011 22:14:12.342714   70125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 22:14:12.342753   70125 main.go:141] libmachine: Decoding PEM data...
	I1011 22:14:12.342769   70125 main.go:141] libmachine: Parsing certificate...
	I1011 22:14:12.342814   70125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 22:14:12.342836   70125 main.go:141] libmachine: Decoding PEM data...
	I1011 22:14:12.342846   70125 main.go:141] libmachine: Parsing certificate...
	I1011 22:14:12.342859   70125 main.go:141] libmachine: Running pre-create checks...
	I1011 22:14:12.342867   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .PreCreateCheck
	I1011 22:14:12.343166   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:14:12.424561   70125 main.go:141] libmachine: Creating machine...
	I1011 22:14:12.424585   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .Create
	I1011 22:14:12.424859   70125 main.go:141] libmachine: (old-k8s-version-323416) Creating KVM machine...
	I1011 22:14:12.426359   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found existing default KVM network
	I1011 22:14:12.428185   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.427982   70316 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:ae:1a} reservation:<nil>}
	I1011 22:14:12.429439   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.429355   70316 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003621d0}
	I1011 22:14:12.429463   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | created network xml: 
	I1011 22:14:12.429473   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | <network>
	I1011 22:14:12.429478   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   <name>mk-old-k8s-version-323416</name>
	I1011 22:14:12.429484   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   <dns enable='no'/>
	I1011 22:14:12.429488   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   
	I1011 22:14:12.429498   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1011 22:14:12.429505   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |     <dhcp>
	I1011 22:14:12.429517   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1011 22:14:12.429559   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |     </dhcp>
	I1011 22:14:12.429582   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   </ip>
	I1011 22:14:12.429591   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG |   
	I1011 22:14:12.429600   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | </network>
	I1011 22:14:12.429611   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | 
	I1011 22:14:12.434764   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | trying to create private KVM network mk-old-k8s-version-323416 192.168.50.0/24...
	I1011 22:14:12.515696   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | private KVM network mk-old-k8s-version-323416 192.168.50.0/24 created
	I1011 22:14:12.515732   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.515656   70316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:14:12.515746   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416 ...
	I1011 22:14:12.515763   70125 main.go:141] libmachine: (old-k8s-version-323416) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 22:14:12.515942   70125 main.go:141] libmachine: (old-k8s-version-323416) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 22:14:12.816102   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.815954   70316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa...
	I1011 22:14:12.979563   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.979418   70316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/old-k8s-version-323416.rawdisk...
	I1011 22:14:12.979593   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Writing magic tar header
	I1011 22:14:12.979623   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Writing SSH key tar header
	I1011 22:14:12.979642   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:12.979532   70316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416 ...
	I1011 22:14:12.979664   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416
	I1011 22:14:12.979678   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 22:14:12.979704   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416 (perms=drwx------)
	I1011 22:14:12.979721   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:14:12.979742   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 22:14:12.979761   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 22:14:12.979774   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 22:14:12.979790   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 22:14:12.979807   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 22:14:12.979821   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 22:14:12.979830   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home/jenkins
	I1011 22:14:12.979838   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Checking permissions on dir: /home
	I1011 22:14:12.979850   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Skipping /home - not owner
	I1011 22:14:12.979859   70125 main.go:141] libmachine: (old-k8s-version-323416) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 22:14:12.979869   70125 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:14:12.981065   70125 main.go:141] libmachine: (old-k8s-version-323416) define libvirt domain using xml: 
	I1011 22:14:12.981093   70125 main.go:141] libmachine: (old-k8s-version-323416) <domain type='kvm'>
	I1011 22:14:12.981104   70125 main.go:141] libmachine: (old-k8s-version-323416)   <name>old-k8s-version-323416</name>
	I1011 22:14:12.981116   70125 main.go:141] libmachine: (old-k8s-version-323416)   <memory unit='MiB'>2200</memory>
	I1011 22:14:12.981123   70125 main.go:141] libmachine: (old-k8s-version-323416)   <vcpu>2</vcpu>
	I1011 22:14:12.981133   70125 main.go:141] libmachine: (old-k8s-version-323416)   <features>
	I1011 22:14:12.981163   70125 main.go:141] libmachine: (old-k8s-version-323416)     <acpi/>
	I1011 22:14:12.981186   70125 main.go:141] libmachine: (old-k8s-version-323416)     <apic/>
	I1011 22:14:12.981206   70125 main.go:141] libmachine: (old-k8s-version-323416)     <pae/>
	I1011 22:14:12.981222   70125 main.go:141] libmachine: (old-k8s-version-323416)     
	I1011 22:14:12.981238   70125 main.go:141] libmachine: (old-k8s-version-323416)   </features>
	I1011 22:14:12.981249   70125 main.go:141] libmachine: (old-k8s-version-323416)   <cpu mode='host-passthrough'>
	I1011 22:14:12.981261   70125 main.go:141] libmachine: (old-k8s-version-323416)   
	I1011 22:14:12.981269   70125 main.go:141] libmachine: (old-k8s-version-323416)   </cpu>
	I1011 22:14:12.981274   70125 main.go:141] libmachine: (old-k8s-version-323416)   <os>
	I1011 22:14:12.981280   70125 main.go:141] libmachine: (old-k8s-version-323416)     <type>hvm</type>
	I1011 22:14:12.981286   70125 main.go:141] libmachine: (old-k8s-version-323416)     <boot dev='cdrom'/>
	I1011 22:14:12.981299   70125 main.go:141] libmachine: (old-k8s-version-323416)     <boot dev='hd'/>
	I1011 22:14:12.981319   70125 main.go:141] libmachine: (old-k8s-version-323416)     <bootmenu enable='no'/>
	I1011 22:14:12.981329   70125 main.go:141] libmachine: (old-k8s-version-323416)   </os>
	I1011 22:14:12.981337   70125 main.go:141] libmachine: (old-k8s-version-323416)   <devices>
	I1011 22:14:12.981347   70125 main.go:141] libmachine: (old-k8s-version-323416)     <disk type='file' device='cdrom'>
	I1011 22:14:12.981360   70125 main.go:141] libmachine: (old-k8s-version-323416)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/boot2docker.iso'/>
	I1011 22:14:12.981368   70125 main.go:141] libmachine: (old-k8s-version-323416)       <target dev='hdc' bus='scsi'/>
	I1011 22:14:12.981375   70125 main.go:141] libmachine: (old-k8s-version-323416)       <readonly/>
	I1011 22:14:12.981383   70125 main.go:141] libmachine: (old-k8s-version-323416)     </disk>
	I1011 22:14:12.981417   70125 main.go:141] libmachine: (old-k8s-version-323416)     <disk type='file' device='disk'>
	I1011 22:14:12.981438   70125 main.go:141] libmachine: (old-k8s-version-323416)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 22:14:12.981454   70125 main.go:141] libmachine: (old-k8s-version-323416)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/old-k8s-version-323416.rawdisk'/>
	I1011 22:14:12.981460   70125 main.go:141] libmachine: (old-k8s-version-323416)       <target dev='hda' bus='virtio'/>
	I1011 22:14:12.981477   70125 main.go:141] libmachine: (old-k8s-version-323416)     </disk>
	I1011 22:14:12.981485   70125 main.go:141] libmachine: (old-k8s-version-323416)     <interface type='network'>
	I1011 22:14:12.981496   70125 main.go:141] libmachine: (old-k8s-version-323416)       <source network='mk-old-k8s-version-323416'/>
	I1011 22:14:12.981504   70125 main.go:141] libmachine: (old-k8s-version-323416)       <model type='virtio'/>
	I1011 22:14:12.981512   70125 main.go:141] libmachine: (old-k8s-version-323416)     </interface>
	I1011 22:14:12.981521   70125 main.go:141] libmachine: (old-k8s-version-323416)     <interface type='network'>
	I1011 22:14:12.981533   70125 main.go:141] libmachine: (old-k8s-version-323416)       <source network='default'/>
	I1011 22:14:12.981540   70125 main.go:141] libmachine: (old-k8s-version-323416)       <model type='virtio'/>
	I1011 22:14:12.981545   70125 main.go:141] libmachine: (old-k8s-version-323416)     </interface>
	I1011 22:14:12.981550   70125 main.go:141] libmachine: (old-k8s-version-323416)     <serial type='pty'>
	I1011 22:14:12.981558   70125 main.go:141] libmachine: (old-k8s-version-323416)       <target port='0'/>
	I1011 22:14:12.981568   70125 main.go:141] libmachine: (old-k8s-version-323416)     </serial>
	I1011 22:14:12.981577   70125 main.go:141] libmachine: (old-k8s-version-323416)     <console type='pty'>
	I1011 22:14:12.981588   70125 main.go:141] libmachine: (old-k8s-version-323416)       <target type='serial' port='0'/>
	I1011 22:14:12.981615   70125 main.go:141] libmachine: (old-k8s-version-323416)     </console>
	I1011 22:14:12.981642   70125 main.go:141] libmachine: (old-k8s-version-323416)     <rng model='virtio'>
	I1011 22:14:12.981665   70125 main.go:141] libmachine: (old-k8s-version-323416)       <backend model='random'>/dev/random</backend>
	I1011 22:14:12.981676   70125 main.go:141] libmachine: (old-k8s-version-323416)     </rng>
	I1011 22:14:12.981684   70125 main.go:141] libmachine: (old-k8s-version-323416)     
	I1011 22:14:12.981694   70125 main.go:141] libmachine: (old-k8s-version-323416)     
	I1011 22:14:12.981708   70125 main.go:141] libmachine: (old-k8s-version-323416)   </devices>
	I1011 22:14:12.981718   70125 main.go:141] libmachine: (old-k8s-version-323416) </domain>
	I1011 22:14:12.981730   70125 main.go:141] libmachine: (old-k8s-version-323416) 
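For readability, the libvirt domain definition that the DBG lines above print one element per log line reassembles to roughly the following XML (taken verbatim from the logged elements; indentation is approximate):

	<domain type='kvm'>
	  <name>old-k8s-version-323416</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/old-k8s-version-323416.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-323416'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>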
	I1011 22:14:12.986040   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:38:b9:7a in network default
	I1011 22:14:12.986661   70125 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:14:12.986682   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:12.987324   70125 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:14:12.987707   70125 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:14:12.988397   70125 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:14:12.989259   70125 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:14:14.357126   70125 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:14:14.357885   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:14.358341   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:14.358374   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:14.358320   70316 retry.go:31] will retry after 284.899108ms: waiting for machine to come up
	I1011 22:14:14.645064   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:14.645694   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:14.645723   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:14.645650   70316 retry.go:31] will retry after 286.462529ms: waiting for machine to come up
	I1011 22:14:14.934697   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:14.935319   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:14.935407   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:14.935302   70316 retry.go:31] will retry after 379.531003ms: waiting for machine to come up
	I1011 22:14:15.317011   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:15.317719   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:15.317744   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:15.317665   70316 retry.go:31] will retry after 568.091664ms: waiting for machine to come up
	I1011 22:14:15.887412   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:15.887969   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:15.887996   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:15.887920   70316 retry.go:31] will retry after 537.100848ms: waiting for machine to come up
	I1011 22:14:16.426282   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:16.426848   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:16.426884   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:16.426810   70316 retry.go:31] will retry after 797.052996ms: waiting for machine to come up
	I1011 22:14:17.225506   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:17.226137   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:17.226162   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:17.226072   70316 retry.go:31] will retry after 827.926804ms: waiting for machine to come up
	I1011 22:14:18.055729   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:18.056252   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:18.056277   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:18.056231   70316 retry.go:31] will retry after 1.437420144s: waiting for machine to come up
	I1011 22:14:19.495941   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:19.496510   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:19.496536   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:19.496479   70316 retry.go:31] will retry after 1.401012554s: waiting for machine to come up
	I1011 22:14:20.899100   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:20.899732   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:20.899764   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:20.899652   70316 retry.go:31] will retry after 2.091272545s: waiting for machine to come up
	I1011 22:14:22.993115   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:22.993664   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:22.993689   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:22.993613   70316 retry.go:31] will retry after 2.7396598s: waiting for machine to come up
	I1011 22:14:25.736530   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:25.737089   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:25.737119   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:25.737036   70316 retry.go:31] will retry after 2.953076759s: waiting for machine to come up
	I1011 22:14:28.691844   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:28.692394   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:28.692424   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:28.692346   70316 retry.go:31] will retry after 4.143780965s: waiting for machine to come up
	I1011 22:14:32.837057   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:32.837426   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:14:32.837458   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:14:32.837364   70316 retry.go:31] will retry after 5.457243457s: waiting for machine to come up
	I1011 22:14:38.295769   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:38.296498   70125 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:14:38.296523   70125 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:14:38.296536   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:38.296820   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416
	I1011 22:14:38.374457   70125 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:14:38.374486   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:14:38.374496   70125 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:14:38.377116   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:38.377400   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416
	I1011 22:14:38.377427   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find defined IP address of network mk-old-k8s-version-323416 interface with MAC address 52:54:00:d4:30:4b
	I1011 22:14:38.377610   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:14:38.377639   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:14:38.377671   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:14:38.377688   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:14:38.377704   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:14:38.381389   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: exit status 255: 
	I1011 22:14:38.381413   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 22:14:38.381422   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | command : exit 0
	I1011 22:14:38.381430   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | err     : exit status 255
	I1011 22:14:38.381442   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | output  : 
	I1011 22:14:41.383520   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:14:41.386187   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.386557   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.386587   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.386722   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:14:41.386749   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:14:41.386780   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:14:41.386792   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:14:41.386806   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:14:41.510707   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
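The args slice and binary in the DBG dump above correspond roughly to the following external SSH invocation (assembled from the logged fields; the earlier attempt at 22:14:38 failed with exit status 255 because the host part after docker@ was still empty):

	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22 "exit 0"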
	I1011 22:14:41.510917   70125 main.go:141] libmachine: (old-k8s-version-323416) KVM machine creation complete!
	I1011 22:14:41.511240   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:14:41.511798   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:41.511982   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:41.512127   70125 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 22:14:41.512143   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:14:41.513450   70125 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 22:14:41.513470   70125 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 22:14:41.513481   70125 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 22:14:41.513489   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:41.515853   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.516205   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.516231   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.516353   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:41.516522   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.516660   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.516789   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:41.516951   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:41.517124   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:41.517135   70125 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 22:14:41.621977   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:14:41.622007   70125 main.go:141] libmachine: Detecting the provisioner...
	I1011 22:14:41.622020   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:41.624830   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.625154   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.625186   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.625389   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:41.625581   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.625759   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.625917   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:41.626038   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:41.626205   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:41.626216   70125 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 22:14:41.739191   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 22:14:41.739254   70125 main.go:141] libmachine: found compatible host: buildroot
	I1011 22:14:41.739261   70125 main.go:141] libmachine: Provisioning with buildroot...
	I1011 22:14:41.739267   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:14:41.739494   70125 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:14:41.739523   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:14:41.739693   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:41.742391   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.742800   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.742830   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.742998   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:41.743187   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.743358   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.743506   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:41.743645   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:41.743872   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:41.743891   70125 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:14:41.869647   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:14:41.869677   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:41.872280   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.872616   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.872646   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.872795   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:41.873107   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.873263   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:41.873388   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:41.873543   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:41.873755   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:41.873774   70125 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:14:41.987610   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:14:41.987644   70125 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:14:41.987683   70125 buildroot.go:174] setting up certificates
	I1011 22:14:41.987696   70125 provision.go:84] configureAuth start
	I1011 22:14:41.987707   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:14:41.987956   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:14:41.990759   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.991238   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.991267   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.991446   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:41.993707   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.994012   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:41.994038   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:41.994116   70125 provision.go:143] copyHostCerts
	I1011 22:14:41.994181   70125 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:14:41.994201   70125 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:14:41.994270   70125 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:14:41.994378   70125 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:14:41.994390   70125 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:14:41.994426   70125 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:14:41.994499   70125 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:14:41.994510   70125 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:14:41.994537   70125 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:14:41.994596   70125 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:14:42.051438   70125 provision.go:177] copyRemoteCerts
	I1011 22:14:42.051503   70125 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:14:42.051538   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.054847   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.055284   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.055311   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.055529   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.055736   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.055884   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.055986   70125 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:14:42.140720   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:14:42.165320   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:14:42.188898   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:14:42.213694   70125 provision.go:87] duration metric: took 225.983091ms to configureAuth
	I1011 22:14:42.213722   70125 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:14:42.213916   70125 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:14:42.213997   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.216782   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.217147   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.217171   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.217342   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.217537   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.217722   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.217879   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.218038   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:42.218251   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:42.218269   70125 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:14:42.459578   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:14:42.459607   70125 main.go:141] libmachine: Checking connection to Docker...
	I1011 22:14:42.459614   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetURL
	I1011 22:14:42.461057   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using libvirt version 6000000
	I1011 22:14:42.463091   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.463484   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.463513   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.463762   70125 main.go:141] libmachine: Docker is up and running!
	I1011 22:14:42.463776   70125 main.go:141] libmachine: Reticulating splines...
	I1011 22:14:42.463782   70125 client.go:171] duration metric: took 30.12109686s to LocalClient.Create
	I1011 22:14:42.463809   70125 start.go:167] duration metric: took 30.121195813s to libmachine.API.Create "old-k8s-version-323416"
	I1011 22:14:42.463821   70125 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:14:42.463830   70125 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:14:42.463845   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:42.464096   70125 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:14:42.464118   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.466302   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.466691   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.466715   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.466844   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.467002   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.467129   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.467246   70125 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:14:42.549717   70125 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:14:42.554293   70125 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:14:42.554332   70125 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:14:42.554388   70125 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:14:42.554454   70125 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:14:42.554544   70125 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:14:42.564402   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:14:42.588646   70125 start.go:296] duration metric: took 124.812825ms for postStartSetup
	I1011 22:14:42.588707   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:14:42.607142   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:14:42.609887   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.610235   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.610258   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.610562   70125 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:14:42.610823   70125 start.go:128] duration metric: took 30.470850967s to createHost
	I1011 22:14:42.610862   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.613076   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.613396   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.613430   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.613595   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.613762   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.613917   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.614014   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.614181   70125 main.go:141] libmachine: Using SSH client type: native
	I1011 22:14:42.614343   70125 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:14:42.614354   70125 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:14:42.719096   70125 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728684882.698955864
	
	I1011 22:14:42.719121   70125 fix.go:216] guest clock: 1728684882.698955864
	I1011 22:14:42.719130   70125 fix.go:229] Guest: 2024-10-11 22:14:42.698955864 +0000 UTC Remote: 2024-10-11 22:14:42.610847765 +0000 UTC m=+41.864554166 (delta=88.108099ms)
	I1011 22:14:42.719186   70125 fix.go:200] guest clock delta is within tolerance: 88.108099ms
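As a quick check of the delta reported above: the guest epoch timestamp 1728684882.698955864 corresponds to 2024-10-11 22:14:42.698955864 UTC, and 42.698955864 s − 42.610847765 s = 0.088108099 s ≈ 88.108099 ms, consistent with the "within tolerance" line.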
	I1011 22:14:42.719193   70125 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 30.579460466s
	I1011 22:14:42.719222   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:42.719500   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:14:42.722582   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.722968   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.723007   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.723195   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:42.723687   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:42.723854   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:14:42.723929   70125 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:14:42.723984   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.724067   70125 ssh_runner.go:195] Run: cat /version.json
	I1011 22:14:42.724093   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:14:42.727939   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.728109   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.728396   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.728427   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.728668   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:42.728688   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:42.728731   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.728919   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.728942   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:14:42.729166   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.729186   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:14:42.729370   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:14:42.729382   70125 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:14:42.729498   70125 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:14:42.831743   70125 ssh_runner.go:195] Run: systemctl --version
	I1011 22:14:42.838176   70125 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:14:43.419808   70125 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:14:43.426547   70125 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:14:43.426634   70125 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:14:43.443675   70125 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:14:43.443698   70125 start.go:495] detecting cgroup driver to use...
	I1011 22:14:43.443757   70125 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:14:43.464016   70125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:14:43.481574   70125 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:14:43.481638   70125 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:14:43.495548   70125 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:14:43.511347   70125 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:14:43.634176   70125 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:14:43.782052   70125 docker.go:233] disabling docker service ...
	I1011 22:14:43.782111   70125 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:14:43.797094   70125 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:14:43.811035   70125 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:14:43.945396   70125 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:14:44.093625   70125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:14:44.107281   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:14:44.128291   70125 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:14:44.128345   70125 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:14:44.138543   70125 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:14:44.138637   70125 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:14:44.149190   70125 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:14:44.161538   70125 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
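The four Run: lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; assuming the pause_image and cgroup_manager keys were already present for the in-place substitutions to match, their net effect is a sketch along these lines:

	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"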
	I1011 22:14:44.173271   70125 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:14:44.184749   70125 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:14:44.195733   70125 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:14:44.195790   70125 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:14:44.212764   70125 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:14:44.232063   70125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:14:44.373498   70125 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:14:44.480879   70125 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:14:44.480948   70125 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:14:44.485957   70125 start.go:563] Will wait 60s for crictl version
	I1011 22:14:44.486008   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:44.489945   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:14:44.526860   70125 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:14:44.526944   70125 ssh_runner.go:195] Run: crio --version
	I1011 22:14:44.557952   70125 ssh_runner.go:195] Run: crio --version
	I1011 22:14:44.593705   70125 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:14:44.595009   70125 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:14:44.598194   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:44.598595   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:14:28 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:14:44.598642   70125 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:14:44.598850   70125 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:14:44.603421   70125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:14:44.616598   70125 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:14:44.616703   70125 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:14:44.616755   70125 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:14:44.667508   70125 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:14:44.667588   70125 ssh_runner.go:195] Run: which lz4
	I1011 22:14:44.671883   70125 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:14:44.676190   70125 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:14:44.676217   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:14:46.366552   70125 crio.go:462] duration metric: took 1.694713107s to copy over tarball
	I1011 22:14:46.366654   70125 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:14:49.414983   70125 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.048294746s)
	I1011 22:14:49.415024   70125 crio.go:469] duration metric: took 3.048435217s to extract the tarball
	I1011 22:14:49.415035   70125 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:14:49.462595   70125 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:14:49.519881   70125 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
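Both "sudo crictl images --output json" probes above end in "couldn't find preloaded image", which is what pushes this run onto the slower LoadCachedImages path that follows. A minimal sketch of that presence check, assuming crictl's JSON output exposes an "images" array whose entries carry "repoTags" (true for recent crictl releases, but treat the field names as an assumption):

// preloadcheck_sketch.go - illustrative only: reproduce the check the log performs with
// "sudo crictl images --output json", i.e. decide whether a given image tag is already
// present in the container runtime.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func imagePresent(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePresent("registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err)
}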
	I1011 22:14:49.519906   70125 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:14:49.519981   70125 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:14:49.520225   70125 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:49.520362   70125 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:49.520394   70125 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:49.520503   70125 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:49.520552   70125 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:14:49.520590   70125 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:14:49.520499   70125 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:49.522516   70125 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:49.522558   70125 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:49.522603   70125 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:49.522646   70125 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:14:49.522678   70125 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:14:49.522534   70125 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:49.522983   70125 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:14:49.523157   70125 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:49.690290   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:49.694182   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:49.697908   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:49.711197   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:14:49.712617   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:49.712996   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:14:49.745898   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:49.798850   70125 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:14:49.798928   70125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:49.798980   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.798965   70125 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:14:49.799060   70125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:49.799092   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.829988   70125 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:14:49.830029   70125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:49.830075   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.880686   70125 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:14:49.880726   70125 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:14:49.880741   70125 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:14:49.880772   70125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:49.880777   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.880805   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.880874   70125 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:14:49.880890   70125 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:14:49.880910   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.891077   70125 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:14:49.891117   70125 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:49.891156   70125 ssh_runner.go:195] Run: which crictl
	I1011 22:14:49.891189   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:49.891262   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:49.891368   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:49.893989   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:49.894043   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:14:49.894076   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:14:50.012123   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:50.012221   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:50.043076   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:50.061653   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:50.061710   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:14:50.061657   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:50.061862   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:14:50.131521   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:50.131577   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:14:50.210259   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:14:50.210336   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:14:50.219201   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:14:50.242805   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:14:50.242942   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:14:50.310071   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:14:50.324142   70125 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:14:50.350792   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:14:50.350856   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:14:50.370542   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:14:50.390985   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:14:50.391019   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:14:50.414131   70125 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:14:50.781292   70125 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:14:50.943492   70125 cache_images.go:92] duration metric: took 1.423569061s to LoadCachedImages
	W1011 22:14:50.943556   70125 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:14:50.943568   70125 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:14:50.943669   70125 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
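One detail worth noting in the kubelet drop-in above: the bare "ExecStart=" line is deliberate systemd syntax that clears any ExecStart inherited from the base kubelet.service, so the following ExecStart fully replaces it rather than being appended. A small, purely illustrative sketch of rendering such a drop-in from a template (the template text and values below are placeholders mirroring this log, not minikube's generator):

// kubeletdropin_sketch.go - illustrative only: render a systemd drop-in like the one
// logged above. The empty "ExecStart=" clears the base unit's ExecStart list first.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropin))
	// Hypothetical values taken from this log.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio", "KubeVersion": "v1.20.0",
		"Node": "old-k8s-version-323416", "IP": "192.168.50.223",
	})
}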
	I1011 22:14:50.943775   70125 ssh_runner.go:195] Run: crio config
	I1011 22:14:51.002859   70125 cni.go:84] Creating CNI manager for ""
	I1011 22:14:51.002887   70125 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:14:51.002897   70125 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:14:51.002918   70125 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:14:51.003079   70125 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:14:51.003138   70125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:14:51.013931   70125 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:14:51.014000   70125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:14:51.023620   70125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:14:51.043216   70125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:14:51.062017   70125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
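At this point the rendered kubeadm config (the multi-document YAML shown above) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. When debugging a failed init it can help to confirm that every expected document made it into the file; a throwaway sketch that splits the multi-document YAML and prints each document's apiVersion and kind (assumes the gopkg.in/yaml.v3 module; not part of minikube):

// kubeadmcfg_sketch.go - illustrative sanity check: list the YAML documents in the
// generated kubeadm config (expected: InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration).
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}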
	I1011 22:14:51.078751   70125 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:14:51.083022   70125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:14:51.096461   70125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:14:51.233971   70125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:14:51.253545   70125 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:14:51.253567   70125 certs.go:194] generating shared ca certs ...
	I1011 22:14:51.253586   70125 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.253746   70125 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:14:51.253801   70125 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:14:51.253815   70125 certs.go:256] generating profile certs ...
	I1011 22:14:51.253891   70125 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:14:51.253930   70125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.crt with IP's: []
	I1011 22:14:51.359780   70125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.crt ...
	I1011 22:14:51.359806   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.crt: {Name:mka6add2c8e93c37853a349ebbaf09a46faebc82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.359959   70125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key ...
	I1011 22:14:51.359973   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key: {Name:mk453d5216bfd5d2d3fefa06aa765f39db3ceb47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.360056   70125 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:14:51.360073   70125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt.7ceeacb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.223]
	I1011 22:14:51.531881   70125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt.7ceeacb9 ...
	I1011 22:14:51.531911   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt.7ceeacb9: {Name:mk4ea646f8de46fe9b9d1dadcdc6571401433131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.532136   70125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9 ...
	I1011 22:14:51.532159   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9: {Name:mkcd28eac7623d153c5ab208670c319400fa0f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.532296   70125 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt.7ceeacb9 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt
	I1011 22:14:51.532403   70125 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key
	I1011 22:14:51.532484   70125 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:14:51.532507   70125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt with IP's: []
	I1011 22:14:51.639774   70125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt ...
	I1011 22:14:51.639805   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt: {Name:mk9df0dda835c406a7da33f80f71f282a2c0e7ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.639986   70125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key ...
	I1011 22:14:51.640006   70125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key: {Name:mkd2a2fb0da5e7a2f0d27765cb5f9eaa96820875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:14:51.640263   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:14:51.640301   70125 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:14:51.640312   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:14:51.640331   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:14:51.640352   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:14:51.640373   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:14:51.640414   70125 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:14:51.641022   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:14:51.672674   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:14:51.702858   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:14:51.733491   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:14:51.763088   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:14:51.793780   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:14:51.822794   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:14:51.854073   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:14:51.884561   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:14:51.914539   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:14:51.941495   70125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:14:51.966521   70125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:14:51.985430   70125 ssh_runner.go:195] Run: openssl version
	I1011 22:14:51.991797   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:14:52.004050   70125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:14:52.009252   70125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:14:52.009321   70125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:14:52.015455   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:14:52.026141   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:14:52.036914   70125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:14:52.041908   70125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:14:52.041965   70125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:14:52.048064   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:14:52.059152   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:14:52.069800   70125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:14:52.074373   70125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:14:52.074441   70125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:14:52.080680   70125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
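The openssl/ln pairs above implement the standard OpenSSL CA-trust layout: hash each PEM with "openssl x509 -hash -noout" and symlink it into /etc/ssl/certs as "<hash>.0" so TLS clients on the node trust the minikube CA and the test certificates. A compact sketch of the same step (illustrative only; needs root, and the PEM path is simply the one from this log):

// catrust_sketch.go - illustrative: compute a certificate's OpenSSL subject hash and
// symlink the PEM into /etc/ssl/certs under "<hash>.0".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}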
	I1011 22:14:52.091316   70125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:14:52.095753   70125 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 22:14:52.095819   70125 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:14:52.095905   70125 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:14:52.095955   70125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:14:52.140598   70125 cri.go:89] found id: ""
	I1011 22:14:52.140681   70125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:14:52.159852   70125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:14:52.175926   70125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:14:52.192436   70125 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:14:52.192456   70125 kubeadm.go:157] found existing configuration files:
	
	I1011 22:14:52.192495   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:14:52.203470   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:14:52.203525   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:14:52.230568   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:14:52.245983   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:14:52.246063   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:14:52.263513   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:14:52.273664   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:14:52.273725   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:14:52.287494   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:14:52.300427   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:14:52.300494   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:14:52.311300   70125 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:14:52.463633   70125 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:14:52.463741   70125 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:14:52.635659   70125 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:14:52.635784   70125 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:14:52.635950   70125 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:14:52.836967   70125 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:14:52.839679   70125 out.go:235]   - Generating certificates and keys ...
	I1011 22:14:52.839801   70125 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:14:52.839884   70125 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:14:53.076389   70125 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 22:14:53.162996   70125 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 22:14:53.331815   70125 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 22:14:53.416436   70125 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 22:14:53.576682   70125 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 22:14:53.577003   70125 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-323416] and IPs [192.168.50.223 127.0.0.1 ::1]
	I1011 22:14:53.677078   70125 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 22:14:53.677312   70125 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-323416] and IPs [192.168.50.223 127.0.0.1 ::1]
	I1011 22:14:53.775058   70125 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 22:14:53.941984   70125 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 22:14:54.281243   70125 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 22:14:54.281627   70125 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:14:54.339947   70125 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:14:54.512943   70125 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:14:55.067963   70125 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:14:55.196016   70125 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:14:55.213476   70125 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:14:55.214930   70125 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:14:55.215004   70125 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:14:55.375196   70125 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:14:55.377147   70125 out.go:235]   - Booting up control plane ...
	I1011 22:14:55.377296   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:14:55.382730   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:14:55.392607   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:14:55.393821   70125 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:14:55.399757   70125 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:15:35.395709   70125 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:15:35.395959   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:15:35.396213   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:15:40.396660   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:15:40.396970   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:15:50.396447   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:15:50.396745   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:16:10.396565   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:16:10.396837   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:16:50.398018   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:16:50.398254   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:16:50.398267   70125 kubeadm.go:310] 
	I1011 22:16:50.398359   70125 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:16:50.398435   70125 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:16:50.398451   70125 kubeadm.go:310] 
	I1011 22:16:50.398510   70125 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:16:50.398602   70125 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:16:50.398767   70125 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:16:50.398778   70125 kubeadm.go:310] 
	I1011 22:16:50.398922   70125 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:16:50.398972   70125 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:16:50.399012   70125 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:16:50.399024   70125 kubeadm.go:310] 
	I1011 22:16:50.399191   70125 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:16:50.399300   70125 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:16:50.399311   70125 kubeadm.go:310] 
	I1011 22:16:50.399445   70125 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:16:50.399604   70125 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:16:50.399705   70125 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:16:50.399814   70125 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:16:50.399834   70125 kubeadm.go:310] 
	I1011 22:16:50.400180   70125 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:16:50.400293   70125 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:16:50.400406   70125 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
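The repeated [kubelet-check] lines above are kubeadm's wait-control-plane phase polling the kubelet's local health endpoint; every probe of http://localhost:10248/healthz is refused, so after roughly four minutes kubeadm gives up, and that timeout is the direct cause of this test failure. A minimal sketch of that wait loop (timings are illustrative, not kubeadm's exact backoff):

// kubeletwait_sketch.go - illustrative: poll the kubelet health endpoint until it
// answers 200 OK or the deadline expires, mirroring the [kubelet-check] phase above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForKubelet(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is answering
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
}

func main() {
	if err := waitForKubelet(4 * time.Minute); err != nil {
		fmt.Println(err) // in the log above, this is where kubeadm gives up
	}
}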
	W1011 22:16:50.400564   70125 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-323416] and IPs [192.168.50.223 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-323416] and IPs [192.168.50.223 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:16:50.400610   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:16:51.579866   70125 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.179223808s)
	I1011 22:16:51.579950   70125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:16:51.593302   70125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:16:51.602628   70125 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:16:51.602650   70125 kubeadm.go:157] found existing configuration files:
	
	I1011 22:16:51.602695   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:16:51.611395   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:16:51.611453   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:16:51.620093   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:16:51.628540   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:16:51.628576   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:16:51.637486   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:16:51.645941   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:16:51.645983   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:16:51.654718   70125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:16:51.663346   70125 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:16:51.663397   70125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:16:51.672617   70125 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:16:51.741209   70125 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:16:51.741295   70125 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:16:51.894343   70125 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:16:51.894510   70125 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:16:51.894670   70125 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:16:52.085024   70125 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:16:52.087113   70125 out.go:235]   - Generating certificates and keys ...
	I1011 22:16:52.087216   70125 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:16:52.087277   70125 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:16:52.087381   70125 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:16:52.087479   70125 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:16:52.087577   70125 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:16:52.087657   70125 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:16:52.087764   70125 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:16:52.088115   70125 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:16:52.088417   70125 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:16:52.088822   70125 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:16:52.088883   70125 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:16:52.088971   70125 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:16:52.227322   70125 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:16:52.278560   70125 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:16:52.360970   70125 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:16:52.433029   70125 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:16:52.451834   70125 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:16:52.452921   70125 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:16:52.452970   70125 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:16:52.582446   70125 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:16:52.584345   70125 out.go:235]   - Booting up control plane ...
	I1011 22:16:52.584471   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:16:52.592668   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:16:52.593655   70125 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:16:52.594598   70125 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:16:52.605790   70125 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:17:32.608544   70125 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:17:32.608634   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:17:32.608826   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:17:37.609459   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:17:37.609708   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:17:47.609521   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:17:47.609780   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:18:07.609193   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:18:07.609457   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:18:47.609247   70125 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:18:47.609712   70125 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:18:47.609757   70125 kubeadm.go:310] 
	I1011 22:18:47.609838   70125 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:18:47.609919   70125 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:18:47.609940   70125 kubeadm.go:310] 
	I1011 22:18:47.610010   70125 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:18:47.610094   70125 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:18:47.610268   70125 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:18:47.610299   70125 kubeadm.go:310] 
	I1011 22:18:47.610424   70125 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:18:47.610463   70125 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:18:47.610496   70125 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:18:47.610502   70125 kubeadm.go:310] 
	I1011 22:18:47.610610   70125 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:18:47.610752   70125 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:18:47.610767   70125 kubeadm.go:310] 
	I1011 22:18:47.610928   70125 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:18:47.611068   70125 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:18:47.611186   70125 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:18:47.611291   70125 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:18:47.611303   70125 kubeadm.go:310] 
	I1011 22:18:47.611695   70125 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:18:47.611793   70125 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:18:47.611856   70125 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:18:47.611914   70125 kubeadm.go:394] duration metric: took 3m55.516108741s to StartCluster
	I1011 22:18:47.611954   70125 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:18:47.612004   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:18:47.655916   70125 cri.go:89] found id: ""
	I1011 22:18:47.655943   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.655955   70125 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:18:47.655963   70125 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:18:47.656017   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:18:47.694346   70125 cri.go:89] found id: ""
	I1011 22:18:47.694372   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.694380   70125 logs.go:284] No container was found matching "etcd"
	I1011 22:18:47.694386   70125 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:18:47.694466   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:18:47.729881   70125 cri.go:89] found id: ""
	I1011 22:18:47.729904   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.729912   70125 logs.go:284] No container was found matching "coredns"
	I1011 22:18:47.729919   70125 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:18:47.729980   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:18:47.763044   70125 cri.go:89] found id: ""
	I1011 22:18:47.763069   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.763077   70125 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:18:47.763083   70125 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:18:47.763140   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:18:47.797783   70125 cri.go:89] found id: ""
	I1011 22:18:47.797816   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.797824   70125 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:18:47.797829   70125 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:18:47.797883   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:18:47.830912   70125 cri.go:89] found id: ""
	I1011 22:18:47.830944   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.830955   70125 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:18:47.830962   70125 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:18:47.831023   70125 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:18:47.872332   70125 cri.go:89] found id: ""
	I1011 22:18:47.872361   70125 logs.go:282] 0 containers: []
	W1011 22:18:47.872371   70125 logs.go:284] No container was found matching "kindnet"
	I1011 22:18:47.872383   70125 logs.go:123] Gathering logs for dmesg ...
	I1011 22:18:47.872399   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:18:47.886942   70125 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:18:47.886971   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:18:48.043726   70125 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:18:48.043756   70125 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:18:48.043772   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:18:48.151819   70125 logs.go:123] Gathering logs for container status ...
	I1011 22:18:48.151860   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:18:48.195898   70125 logs.go:123] Gathering logs for kubelet ...
	I1011 22:18:48.195930   70125 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 22:18:48.246318   70125 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:18:48.246390   70125 out.go:270] * 
	* 
	W1011 22:18:48.246441   70125 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:18:48.246455   70125 out.go:270] * 
	* 
	W1011 22:18:48.247226   70125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:18:48.250671   70125 out.go:201] 
	W1011 22:18:48.252205   70125 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:18:48.252247   70125 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:18:48.252277   70125 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:18:48.253717   70125 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 6 (217.953552ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:18:48.521772   77071 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-323416" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.80s)
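Diagnosis sketch for the failure above (not part of the captured logs; assumes shell access to the old-k8s-version-323416 VM): kubeadm's wait-control-plane phase timed out because the kubelet never served http://localhost:10248/healthz, and minikube's final suggestion in the log is the systemd cgroup-driver workaround. The checks kubeadm and minikube themselves point to could be run roughly as:

	minikube ssh -p old-k8s-version-323416
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# if the kubelet log points to a cgroup-driver mismatch, retry the start with the suggested flag:
	minikube start -p old-k8s-version-323416 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd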

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-390487 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-390487 --alsologtostderr -v=3: exit status 82 (2m0.502670463s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-390487"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:16:36.154089   75837 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:16:36.154241   75837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:16:36.154252   75837 out.go:358] Setting ErrFile to fd 2...
	I1011 22:16:36.154259   75837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:16:36.154541   75837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:16:36.154884   75837 out.go:352] Setting JSON to false
	I1011 22:16:36.154995   75837 mustload.go:65] Loading cluster: no-preload-390487
	I1011 22:16:36.155554   75837 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:16:36.155661   75837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:16:36.155916   75837 mustload.go:65] Loading cluster: no-preload-390487
	I1011 22:16:36.156097   75837 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:16:36.156142   75837 stop.go:39] StopHost: no-preload-390487
	I1011 22:16:36.156743   75837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:16:36.156810   75837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:16:36.171362   75837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I1011 22:16:36.171841   75837 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:16:36.172413   75837 main.go:141] libmachine: Using API Version  1
	I1011 22:16:36.172450   75837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:16:36.172754   75837 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:16:36.176075   75837 out.go:177] * Stopping node "no-preload-390487"  ...
	I1011 22:16:36.177305   75837 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 22:16:36.177332   75837 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:16:36.177525   75837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 22:16:36.177555   75837 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:16:36.180186   75837 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:16:36.180604   75837 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:15:00 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:16:36.180634   75837 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:16:36.180750   75837 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:16:36.180916   75837 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:16:36.181071   75837 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:16:36.181209   75837 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:16:36.272520   75837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 22:16:36.337075   75837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 22:16:36.406687   75837 main.go:141] libmachine: Stopping "no-preload-390487"...
	I1011 22:16:36.406737   75837 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:16:36.408445   75837 main.go:141] libmachine: (no-preload-390487) Calling .Stop
	I1011 22:16:36.412371   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 0/120
	I1011 22:16:37.413685   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 1/120
	I1011 22:16:38.415453   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 2/120
	I1011 22:16:39.416818   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 3/120
	I1011 22:16:40.418307   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 4/120
	I1011 22:16:41.420231   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 5/120
	I1011 22:16:42.421608   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 6/120
	I1011 22:16:43.423485   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 7/120
	I1011 22:16:44.425001   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 8/120
	I1011 22:16:45.427230   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 9/120
	I1011 22:16:46.428680   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 10/120
	I1011 22:16:47.430029   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 11/120
	I1011 22:16:48.431397   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 12/120
	I1011 22:16:49.433739   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 13/120
	I1011 22:16:50.436201   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 14/120
	I1011 22:16:51.438025   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 15/120
	I1011 22:16:52.439560   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 16/120
	I1011 22:16:53.441020   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 17/120
	I1011 22:16:54.442268   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 18/120
	I1011 22:16:55.443623   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 19/120
	I1011 22:16:56.445638   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 20/120
	I1011 22:16:57.446846   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 21/120
	I1011 22:16:58.449171   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 22/120
	I1011 22:16:59.450520   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 23/120
	I1011 22:17:00.451777   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 24/120
	I1011 22:17:01.453454   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 25/120
	I1011 22:17:02.454855   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 26/120
	I1011 22:17:03.455963   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 27/120
	I1011 22:17:04.458047   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 28/120
	I1011 22:17:05.459804   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 29/120
	I1011 22:17:06.461856   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 30/120
	I1011 22:17:07.463401   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 31/120
	I1011 22:17:08.464989   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 32/120
	I1011 22:17:09.466601   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 33/120
	I1011 22:17:10.467804   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 34/120
	I1011 22:17:11.469686   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 35/120
	I1011 22:17:12.471122   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 36/120
	I1011 22:17:13.472315   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 37/120
	I1011 22:17:14.473776   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 38/120
	I1011 22:17:15.475050   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 39/120
	I1011 22:17:16.477136   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 40/120
	I1011 22:17:17.478507   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 41/120
	I1011 22:17:18.479878   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 42/120
	I1011 22:17:19.481251   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 43/120
	I1011 22:17:20.482431   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 44/120
	I1011 22:17:21.484465   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 45/120
	I1011 22:17:22.485699   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 46/120
	I1011 22:17:23.487034   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 47/120
	I1011 22:17:24.488420   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 48/120
	I1011 22:17:25.489608   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 49/120
	I1011 22:17:26.491146   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 50/120
	I1011 22:17:27.493242   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 51/120
	I1011 22:17:28.494695   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 52/120
	I1011 22:17:29.495932   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 53/120
	I1011 22:17:30.497491   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 54/120
	I1011 22:17:31.499590   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 55/120
	I1011 22:17:32.501220   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 56/120
	I1011 22:17:33.502595   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 57/120
	I1011 22:17:34.503998   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 58/120
	I1011 22:17:35.505413   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 59/120
	I1011 22:17:36.507680   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 60/120
	I1011 22:17:37.509031   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 61/120
	I1011 22:17:38.510428   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 62/120
	I1011 22:17:39.511579   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 63/120
	I1011 22:17:40.513269   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 64/120
	I1011 22:17:41.515385   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 65/120
	I1011 22:17:42.516669   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 66/120
	I1011 22:17:43.518091   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 67/120
	I1011 22:17:44.519346   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 68/120
	I1011 22:17:45.520682   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 69/120
	I1011 22:17:46.522912   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 70/120
	I1011 22:17:47.524235   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 71/120
	I1011 22:17:48.525754   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 72/120
	I1011 22:17:49.527375   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 73/120
	I1011 22:17:50.528782   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 74/120
	I1011 22:17:51.530836   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 75/120
	I1011 22:17:52.532573   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 76/120
	I1011 22:17:53.534027   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 77/120
	I1011 22:17:54.535361   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 78/120
	I1011 22:17:55.536691   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 79/120
	I1011 22:17:56.538927   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 80/120
	I1011 22:17:57.541120   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 81/120
	I1011 22:17:58.542420   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 82/120
	I1011 22:17:59.543708   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 83/120
	I1011 22:18:00.545091   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 84/120
	I1011 22:18:01.546939   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 85/120
	I1011 22:18:02.549103   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 86/120
	I1011 22:18:03.550434   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 87/120
	I1011 22:18:04.551753   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 88/120
	I1011 22:18:05.553078   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 89/120
	I1011 22:18:06.555296   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 90/120
	I1011 22:18:07.556762   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 91/120
	I1011 22:18:08.558137   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 92/120
	I1011 22:18:09.559467   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 93/120
	I1011 22:18:10.560920   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 94/120
	I1011 22:18:11.562789   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 95/120
	I1011 22:18:12.564212   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 96/120
	I1011 22:18:13.565711   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 97/120
	I1011 22:18:14.567179   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 98/120
	I1011 22:18:15.568545   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 99/120
	I1011 22:18:16.570868   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 100/120
	I1011 22:18:17.572132   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 101/120
	I1011 22:18:18.573491   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 102/120
	I1011 22:18:19.574726   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 103/120
	I1011 22:18:20.576132   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 104/120
	I1011 22:18:21.578228   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 105/120
	I1011 22:18:22.579505   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 106/120
	I1011 22:18:23.581169   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 107/120
	I1011 22:18:24.582598   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 108/120
	I1011 22:18:25.584101   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 109/120
	I1011 22:18:26.586319   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 110/120
	I1011 22:18:27.587666   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 111/120
	I1011 22:18:28.589002   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 112/120
	I1011 22:18:29.590203   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 113/120
	I1011 22:18:30.591587   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 114/120
	I1011 22:18:31.593844   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 115/120
	I1011 22:18:32.595243   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 116/120
	I1011 22:18:33.596587   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 117/120
	I1011 22:18:34.597984   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 118/120
	I1011 22:18:35.599772   75837 main.go:141] libmachine: (no-preload-390487) Waiting for machine to stop 119/120
	I1011 22:18:36.601126   75837 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 22:18:36.601193   75837 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1011 22:18:36.603019   75837 out.go:201] 
	W1011 22:18:36.604419   75837 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1011 22:18:36.604434   75837 out.go:270] * 
	* 
	W1011 22:18:36.607173   75837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:18:36.608664   75837 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-390487 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
E1011 22:18:38.159415   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:43.281699   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487: exit status 3 (18.57312273s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:18:55.183020   76958 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host
	E1011 22:18:55.183042   76958 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-390487" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-223942 --alsologtostderr -v=3
E1011 22:16:55.953538   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:55.959917   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:55.971265   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:55.992588   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:56.033944   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:56.115345   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:56.276880   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:56.599039   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.240874   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.615338   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.621682   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.633015   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.654359   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.695713   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.777154   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:57.938674   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:58.260225   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:58.522533   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:16:58.902534   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:00.184791   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-223942 --alsologtostderr -v=3: exit status 82 (2m0.4973825s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-223942"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:16:46.011266   75936 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:16:46.011389   75936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:16:46.011400   75936 out.go:358] Setting ErrFile to fd 2...
	I1011 22:16:46.011404   75936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:16:46.011630   75936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:16:46.011894   75936 out.go:352] Setting JSON to false
	I1011 22:16:46.011983   75936 mustload.go:65] Loading cluster: embed-certs-223942
	I1011 22:16:46.012353   75936 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:16:46.012436   75936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:16:46.012630   75936 mustload.go:65] Loading cluster: embed-certs-223942
	I1011 22:16:46.012753   75936 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:16:46.012798   75936 stop.go:39] StopHost: embed-certs-223942
	I1011 22:16:46.013190   75936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:16:46.013239   75936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:16:46.028202   75936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I1011 22:16:46.028599   75936 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:16:46.029119   75936 main.go:141] libmachine: Using API Version  1
	I1011 22:16:46.029145   75936 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:16:46.029484   75936 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:16:46.032015   75936 out.go:177] * Stopping node "embed-certs-223942"  ...
	I1011 22:16:46.033381   75936 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 22:16:46.033413   75936 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:16:46.033604   75936 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 22:16:46.033635   75936 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:16:46.036447   75936 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:16:46.036863   75936 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:16:46.036913   75936 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:16:46.037028   75936 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:16:46.037196   75936 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:16:46.037367   75936 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:16:46.037521   75936 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:16:46.138336   75936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 22:16:46.198577   75936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 22:16:46.265870   75936 main.go:141] libmachine: Stopping "embed-certs-223942"...
	I1011 22:16:46.265929   75936 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:16:46.267529   75936 main.go:141] libmachine: (embed-certs-223942) Calling .Stop
	I1011 22:16:46.271011   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 0/120
	I1011 22:16:47.272600   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 1/120
	I1011 22:16:48.274355   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 2/120
	I1011 22:16:49.276335   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 3/120
	I1011 22:16:50.277803   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 4/120
	I1011 22:16:51.279720   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 5/120
	I1011 22:16:52.281169   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 6/120
	I1011 22:16:53.283262   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 7/120
	I1011 22:16:54.284657   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 8/120
	I1011 22:16:55.285857   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 9/120
	I1011 22:16:56.288055   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 10/120
	I1011 22:16:57.289471   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 11/120
	I1011 22:16:58.290918   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 12/120
	I1011 22:16:59.292062   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 13/120
	I1011 22:17:00.293387   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 14/120
	I1011 22:17:01.294966   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 15/120
	I1011 22:17:02.296108   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 16/120
	I1011 22:17:03.297495   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 17/120
	I1011 22:17:04.299294   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 18/120
	I1011 22:17:05.300616   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 19/120
	I1011 22:17:06.303281   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 20/120
	I1011 22:17:07.305118   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 21/120
	I1011 22:17:08.306447   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 22/120
	I1011 22:17:09.307877   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 23/120
	I1011 22:17:10.309327   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 24/120
	I1011 22:17:11.310927   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 25/120
	I1011 22:17:12.313163   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 26/120
	I1011 22:17:13.314328   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 27/120
	I1011 22:17:14.315641   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 28/120
	I1011 22:17:15.316950   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 29/120
	I1011 22:17:16.319193   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 30/120
	I1011 22:17:17.320359   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 31/120
	I1011 22:17:18.322043   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 32/120
	I1011 22:17:19.323384   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 33/120
	I1011 22:17:20.324818   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 34/120
	I1011 22:17:21.326914   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 35/120
	I1011 22:17:22.329259   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 36/120
	I1011 22:17:23.330652   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 37/120
	I1011 22:17:24.332007   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 38/120
	I1011 22:17:25.333144   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 39/120
	I1011 22:17:26.335060   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 40/120
	I1011 22:17:27.336354   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 41/120
	I1011 22:17:28.337692   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 42/120
	I1011 22:17:29.339127   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 43/120
	I1011 22:17:30.340415   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 44/120
	I1011 22:17:31.342014   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 45/120
	I1011 22:17:32.343378   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 46/120
	I1011 22:17:33.344742   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 47/120
	I1011 22:17:34.346119   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 48/120
	I1011 22:17:35.347550   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 49/120
	I1011 22:17:36.349497   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 50/120
	I1011 22:17:37.350950   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 51/120
	I1011 22:17:38.352228   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 52/120
	I1011 22:17:39.353314   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 53/120
	I1011 22:17:40.354780   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 54/120
	I1011 22:17:41.357080   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 55/120
	I1011 22:17:42.358510   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 56/120
	I1011 22:17:43.360139   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 57/120
	I1011 22:17:44.361583   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 58/120
	I1011 22:17:45.362955   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 59/120
	I1011 22:17:46.364928   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 60/120
	I1011 22:17:47.366241   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 61/120
	I1011 22:17:48.367502   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 62/120
	I1011 22:17:49.368729   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 63/120
	I1011 22:17:50.369986   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 64/120
	I1011 22:17:51.371917   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 65/120
	I1011 22:17:52.373230   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 66/120
	I1011 22:17:53.374403   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 67/120
	I1011 22:17:54.375781   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 68/120
	I1011 22:17:55.377084   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 69/120
	I1011 22:17:56.379181   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 70/120
	I1011 22:17:57.380305   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 71/120
	I1011 22:17:58.381569   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 72/120
	I1011 22:17:59.382834   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 73/120
	I1011 22:18:00.384000   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 74/120
	I1011 22:18:01.386002   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 75/120
	I1011 22:18:02.387324   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 76/120
	I1011 22:18:03.388481   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 77/120
	I1011 22:18:04.389886   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 78/120
	I1011 22:18:05.391259   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 79/120
	I1011 22:18:06.393084   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 80/120
	I1011 22:18:07.394691   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 81/120
	I1011 22:18:08.396091   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 82/120
	I1011 22:18:09.397366   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 83/120
	I1011 22:18:10.398737   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 84/120
	I1011 22:18:11.400463   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 85/120
	I1011 22:18:12.401870   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 86/120
	I1011 22:18:13.403135   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 87/120
	I1011 22:18:14.404536   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 88/120
	I1011 22:18:15.405749   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 89/120
	I1011 22:18:16.407889   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 90/120
	I1011 22:18:17.409237   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 91/120
	I1011 22:18:18.410816   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 92/120
	I1011 22:18:19.412376   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 93/120
	I1011 22:18:20.413614   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 94/120
	I1011 22:18:21.415555   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 95/120
	I1011 22:18:22.417170   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 96/120
	I1011 22:18:23.418665   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 97/120
	I1011 22:18:24.420117   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 98/120
	I1011 22:18:25.421428   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 99/120
	I1011 22:18:26.423702   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 100/120
	I1011 22:18:27.425003   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 101/120
	I1011 22:18:28.427161   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 102/120
	I1011 22:18:29.428373   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 103/120
	I1011 22:18:30.429822   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 104/120
	I1011 22:18:31.431973   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 105/120
	I1011 22:18:32.433440   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 106/120
	I1011 22:18:33.434875   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 107/120
	I1011 22:18:34.436606   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 108/120
	I1011 22:18:35.438109   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 109/120
	I1011 22:18:36.440775   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 110/120
	I1011 22:18:37.442366   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 111/120
	I1011 22:18:38.443860   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 112/120
	I1011 22:18:39.445523   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 113/120
	I1011 22:18:40.446884   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 114/120
	I1011 22:18:41.449005   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 115/120
	I1011 22:18:42.450269   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 116/120
	I1011 22:18:43.451808   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 117/120
	I1011 22:18:44.453089   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 118/120
	I1011 22:18:45.454485   75936 main.go:141] libmachine: (embed-certs-223942) Waiting for machine to stop 119/120
	I1011 22:18:46.455565   75936 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 22:18:46.455636   75936 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1011 22:18:46.457643   75936 out.go:201] 
	W1011 22:18:46.458949   75936 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1011 22:18:46.458965   75936 out.go:270] * 
	* 
	W1011 22:18:46.461389   75936 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:18:46.463062   75936 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-223942 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942: exit status 3 (18.446024045s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:04.911035   77023 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host
	E1011 22:19:04.911057   77023 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-223942" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-070708 --alsologtostderr -v=3
E1011 22:17:16.447869   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:18.109575   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:36.929352   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:38.591823   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.343216   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.349574   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.360901   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.382353   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.424031   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.505563   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.667338   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:10.989289   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:11.631133   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:12.912761   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:15.474047   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:17.890787   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:19.553794   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:20.595356   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:29.463716   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:30.837082   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.028976   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.035339   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.046675   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.068109   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.109542   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.190924   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.352452   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:33.673953   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:34.315971   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:35.597543   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-070708 --alsologtostderr -v=3: exit status 82 (2m0.494329893s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-070708"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:17:11.989356   76188 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:17:11.989460   76188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:17:11.989468   76188 out.go:358] Setting ErrFile to fd 2...
	I1011 22:17:11.989472   76188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:17:11.989646   76188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:17:11.989876   76188 out.go:352] Setting JSON to false
	I1011 22:17:11.989948   76188 mustload.go:65] Loading cluster: default-k8s-diff-port-070708
	I1011 22:17:11.990254   76188 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:17:11.990317   76188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:17:11.990475   76188 mustload.go:65] Loading cluster: default-k8s-diff-port-070708
	I1011 22:17:11.990568   76188 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:17:11.990598   76188 stop.go:39] StopHost: default-k8s-diff-port-070708
	I1011 22:17:11.990996   76188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:17:11.991040   76188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:17:12.005485   76188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1011 22:17:12.005945   76188 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:17:12.006588   76188 main.go:141] libmachine: Using API Version  1
	I1011 22:17:12.006626   76188 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:17:12.006958   76188 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:17:12.009675   76188 out.go:177] * Stopping node "default-k8s-diff-port-070708"  ...
	I1011 22:17:12.011526   76188 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1011 22:17:12.011552   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:17:12.011759   76188 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1011 22:17:12.011782   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:17:12.014506   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:17:12.014887   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:15:56 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:17:12.014909   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:17:12.014993   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:17:12.015156   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:17:12.015293   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:17:12.015411   76188 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:17:12.108582   76188 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1011 22:17:12.171550   76188 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1011 22:17:12.240015   76188 main.go:141] libmachine: Stopping "default-k8s-diff-port-070708"...
	I1011 22:17:12.240041   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:17:12.241719   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Stop
	I1011 22:17:12.245322   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 0/120
	I1011 22:17:13.246706   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 1/120
	I1011 22:17:14.248034   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 2/120
	I1011 22:17:15.249326   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 3/120
	I1011 22:17:16.250544   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 4/120
	I1011 22:17:17.252673   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 5/120
	I1011 22:17:18.254145   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 6/120
	I1011 22:17:19.255460   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 7/120
	I1011 22:17:20.256947   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 8/120
	I1011 22:17:21.258241   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 9/120
	I1011 22:17:22.259817   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 10/120
	I1011 22:17:23.261461   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 11/120
	I1011 22:17:24.263718   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 12/120
	I1011 22:17:25.265196   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 13/120
	I1011 22:17:26.266380   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 14/120
	I1011 22:17:27.268058   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 15/120
	I1011 22:17:28.269440   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 16/120
	I1011 22:17:29.270817   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 17/120
	I1011 22:17:30.272215   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 18/120
	I1011 22:17:31.273455   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 19/120
	I1011 22:17:32.275542   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 20/120
	I1011 22:17:33.276881   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 21/120
	I1011 22:17:34.278412   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 22/120
	I1011 22:17:35.279823   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 23/120
	I1011 22:17:36.281242   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 24/120
	I1011 22:17:37.283243   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 25/120
	I1011 22:17:38.284756   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 26/120
	I1011 22:17:39.285990   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 27/120
	I1011 22:17:40.287578   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 28/120
	I1011 22:17:41.288841   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 29/120
	I1011 22:17:42.291138   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 30/120
	I1011 22:17:43.292572   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 31/120
	I1011 22:17:44.293934   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 32/120
	I1011 22:17:45.295564   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 33/120
	I1011 22:17:46.296925   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 34/120
	I1011 22:17:47.298942   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 35/120
	I1011 22:17:48.300364   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 36/120
	I1011 22:17:49.301658   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 37/120
	I1011 22:17:50.303125   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 38/120
	I1011 22:17:51.304468   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 39/120
	I1011 22:17:52.306485   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 40/120
	I1011 22:17:53.307823   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 41/120
	I1011 22:17:54.309151   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 42/120
	I1011 22:17:55.310524   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 43/120
	I1011 22:17:56.311860   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 44/120
	I1011 22:17:57.313761   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 45/120
	I1011 22:17:58.315217   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 46/120
	I1011 22:17:59.316488   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 47/120
	I1011 22:18:00.317972   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 48/120
	I1011 22:18:01.319253   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 49/120
	I1011 22:18:02.320551   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 50/120
	I1011 22:18:03.322174   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 51/120
	I1011 22:18:04.323573   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 52/120
	I1011 22:18:05.325088   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 53/120
	I1011 22:18:06.326458   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 54/120
	I1011 22:18:07.328623   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 55/120
	I1011 22:18:08.330103   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 56/120
	I1011 22:18:09.331505   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 57/120
	I1011 22:18:10.332840   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 58/120
	I1011 22:18:11.334271   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 59/120
	I1011 22:18:12.336594   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 60/120
	I1011 22:18:13.338088   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 61/120
	I1011 22:18:14.339456   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 62/120
	I1011 22:18:15.340795   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 63/120
	I1011 22:18:16.342141   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 64/120
	I1011 22:18:17.344094   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 65/120
	I1011 22:18:18.345591   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 66/120
	I1011 22:18:19.347022   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 67/120
	I1011 22:18:20.348387   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 68/120
	I1011 22:18:21.349748   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 69/120
	I1011 22:18:22.351825   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 70/120
	I1011 22:18:23.353228   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 71/120
	I1011 22:18:24.354609   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 72/120
	I1011 22:18:25.355912   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 73/120
	I1011 22:18:26.357127   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 74/120
	I1011 22:18:27.359240   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 75/120
	I1011 22:18:28.360650   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 76/120
	I1011 22:18:29.361873   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 77/120
	I1011 22:18:30.363309   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 78/120
	I1011 22:18:31.364819   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 79/120
	I1011 22:18:32.367092   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 80/120
	I1011 22:18:33.368585   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 81/120
	I1011 22:18:34.369937   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 82/120
	I1011 22:18:35.371550   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 83/120
	I1011 22:18:36.372970   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 84/120
	I1011 22:18:37.375236   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 85/120
	I1011 22:18:38.376767   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 86/120
	I1011 22:18:39.378251   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 87/120
	I1011 22:18:40.379669   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 88/120
	I1011 22:18:41.381035   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 89/120
	I1011 22:18:42.383458   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 90/120
	I1011 22:18:43.384986   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 91/120
	I1011 22:18:44.386599   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 92/120
	I1011 22:18:45.389014   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 93/120
	I1011 22:18:46.390400   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 94/120
	I1011 22:18:47.392632   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 95/120
	I1011 22:18:48.393824   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 96/120
	I1011 22:18:49.395489   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 97/120
	I1011 22:18:50.396726   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 98/120
	I1011 22:18:51.398221   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 99/120
	I1011 22:18:52.400524   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 100/120
	I1011 22:18:53.401798   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 101/120
	I1011 22:18:54.403089   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 102/120
	I1011 22:18:55.404902   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 103/120
	I1011 22:18:56.406239   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 104/120
	I1011 22:18:57.408638   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 105/120
	I1011 22:18:58.410086   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 106/120
	I1011 22:18:59.411709   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 107/120
	I1011 22:19:00.413454   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 108/120
	I1011 22:19:01.415048   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 109/120
	I1011 22:19:02.417024   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 110/120
	I1011 22:19:03.418508   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 111/120
	I1011 22:19:04.419914   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 112/120
	I1011 22:19:05.421603   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 113/120
	I1011 22:19:06.423021   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 114/120
	I1011 22:19:07.425120   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 115/120
	I1011 22:19:08.426452   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 116/120
	I1011 22:19:09.427884   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 117/120
	I1011 22:19:10.429216   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 118/120
	I1011 22:19:11.430659   76188 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for machine to stop 119/120
	I1011 22:19:12.431669   76188 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1011 22:19:12.431749   76188 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1011 22:19:12.433431   76188 out.go:201] 
	W1011 22:19:12.434735   76188 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1011 22:19:12.434750   76188 out.go:270] * 
	* 
	W1011 22:19:12.437339   76188 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:19:12.438627   76188 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-070708 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
E1011 22:19:13.853352   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:14.004937   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708: exit status 3 (18.582499916s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:31.022969   77450 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1011 22:19:31.022989   77450 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-070708" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)
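
Note on the failure pattern above: the kvm2 driver issues .Stop and then polls the machine state once per second for 120 attempts (0/120 through 119/120); because the VM still reports "Running" after the last attempt, the command exits with GUEST_STOP_TIMEOUT and status 82. The following is a minimal sketch of that poll-with-deadline pattern, assuming a hypothetical getState() helper; it is illustrative only and is not minikube's actual driver code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // getState stands in for the driver's GetState call (hypothetical helper).
    func getState() (string, error) { return "Running", nil }

    // waitForStop polls the VM state once per second, up to maxAttempts times,
    // and returns an error if the machine never leaves the "Running" state.
    func waitForStop(maxAttempts int) error {
        for i := 0; i < maxAttempts; i++ {
            state, err := getState()
            if err != nil {
                return err
            }
            if state != "Running" {
                return nil // machine stopped
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
            time.Sleep(time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // 120 attempts at one-second intervals mirrors the wait budget seen in the log.
        if err := waitForStop(120); err != nil {
            fmt.Println("stop err:", err)
        }
    }
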

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-323416 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-323416 create -f testdata/busybox.yaml: exit status 1 (43.004323ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-323416" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-323416 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 6 (215.251569ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:18:48.781581   77111 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-323416" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 6 (221.093511ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:18:49.001582   77141 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-323416" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
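
Note on the failure above: the deploy step fails immediately because the "old-k8s-version-323416" context is missing from the kubeconfig, so kubectl exits before anything is applied. A hedged sketch of how a caller could verify a context exists before shelling out to kubectl (illustrative only, not part of the test harness; it relies on `kubectl config get-contexts -o name`):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // checkContext returns an error if name is not among the contexts known to kubectl.
    func checkContext(name string) error {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return fmt.Errorf("listing kubectl contexts: %w", err)
        }
        for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if ctx == name {
                return nil
            }
        }
        return fmt.Errorf("context %q does not exist", name)
    }

    func main() {
        if err := checkContext("old-k8s-version-323416"); err != nil {
            fmt.Println(err)
        }
    }
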

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-323416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1011 22:18:51.319087   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:18:53.523224   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-323416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.150875288s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-323416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-323416 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-323416 describe deploy/metrics-server -n kube-system: exit status 1 (42.831695ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-323416" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-323416 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 6 (217.716389ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:20:36.413836   77992 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-323416" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487: exit status 3 (3.167758119s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:18:58.350966   77217 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host
	E1011 22:18:58.350989   77217 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-390487 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-390487 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152817626s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-390487 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487: exit status 3 (3.062765216s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:07.567025   77297 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host
	E1011 22:19:07.567052   77297 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.55:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-390487" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942: exit status 3 (3.167825386s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:08.079056   77327 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host
	E1011 22:19:08.079080   77327 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1011 22:19:08.721119   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:08.727479   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:08.738914   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:08.760311   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:08.801788   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:08.883285   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:09.044866   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:09.366596   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:10.008906   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:11.291067   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155854667s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942: exit status 3 (3.059817409s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:17.294993   77480 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host
	E1011 22:19:17.295017   77480 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-223942" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
E1011 22:19:32.280386   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:33.319812   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708: exit status 3 (3.167416748s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:34.190927   77595 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1011 22:19:34.190952   77595 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-070708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1011 22:19:39.812185   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-070708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151880773s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-070708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
E1011 22:19:41.475793   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708: exit status 3 (3.063842677s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 22:19:43.407038   77695 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1011 22:19:43.407059   77695 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-070708" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
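
Note on the three EnableAddonAfterStop failures above: they share one symptom. After the timed-out stop, the status check tries to reach the VM over SSH (192.168.61.55, 192.168.72.238 and 192.168.39.162, port 22) and gets "no route to host", so the host state is reported as "Error" instead of the expected "Stopped", and the subsequent addon enable fails the same way. The sketch below shows the kind of TCP reachability probe that fails at this step; it is a simplified illustration, not minikube's status implementation, which goes through a full SSH client.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // reachable reports whether a TCP connection to addr can be opened within timeout.
    func reachable(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            // On these runs this surfaces as:
            // "dial tcp 192.168.39.162:22: connect: no route to host"
            return err
        }
        conn.Close()
        return nil
    }

    func main() {
        if err := reachable("192.168.39.162:22", 3*time.Second); err != nil {
            fmt.Println("status error:", err)
        }
    }
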

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (736.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1011 22:20:45.005311   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:52.404596   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:54.202016   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:21:16.888616   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:21:33.366028   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:21:52.583512   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:21:55.953109   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:21:57.614732   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:22:06.383117   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:22:06.926839   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:22:23.654280   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:22:25.317750   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:22:55.287411   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:23:10.343113   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:23:33.029244   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:23:38.043420   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:24:00.730530   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:24:08.720689   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:24:23.067533   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:24:36.425085   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:24:50.769115   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:25:11.426561   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:25:24.492319   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:25:39.129072   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:26:47.561096   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:26:55.952541   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:26:57.614765   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:27:06.383129   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:28:10.343153   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:28:33.028685   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:29:08.720757   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m13.199342887s)

                                                
                                                
-- stdout --
	* [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
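The WaitForSSH probe above simply runs "exit 0" over SSH with host-key checking disabled; when a restart stalls at this stage, the same reachability check can be reproduced by hand. A minimal sketch, reusing the key path and guest IP exactly as they appear in this log:

    # Manual reachability check mirroring the probe above (key path and IP taken from this log).
    ssh -F /dev/null \
        -o ConnectTimeout=10 -o ConnectionAttempts=3 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa \
        docker@192.168.50.223 'exit 0' && echo "guest SSH is reachable"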
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
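Taken together, the crictl.yaml write and the sed edits above leave CRI-O pointed at its own socket, using the v1.20-era pause image and the cgroupfs cgroup manager. The resulting files look roughly like the sketch below (the section headers are assumptions about the stock guest-image config, not copied from this log):

    # /etc/crictl.yaml (written verbatim above)
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf after the sed edits (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"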
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
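The status-255 sysctl a few lines up is the case minikube's own message calls "might be okay": /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the probe fails, the module is loaded, and IPv4 forwarding is switched on. The same check-then-load pattern, as a sketch to run inside the guest:

    # Sketch of the check-then-load fallback shown above.
    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter            # creates /proc/sys/net/bridge/*
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"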
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
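The /bin/bash one-liner above rewrites /etc/hosts through a temporary file so that host.minikube.internal resolves to the libvirt gateway 192.168.50.1 inside the guest. Spelled out step by step it is roughly equivalent to the following (a readable restatement, not the literal command run by the test):

    # Drop any stale host.minikube.internal line, append a fresh one, then install the result.
    grep -v $'\thost.minikube.internal$' /etc/hosts  > /tmp/h.$$
    printf '192.168.50.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts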
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
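
The preload path recorded above follows a fixed pattern: stat the target to see whether /preloaded.tar.lz4 already exists on the node, copy the cached tarball over when it does not, unpack it into /var with an lz4-aware tar, then delete the tarball. A minimal shell sketch of the same sequence, using only paths and the host address shown in the log (illustrative only, not part of the test run):

	# mirror of the preload steps recorded above (illustrative sketch)
	TARBALL=/preloaded.tar.lz4
	SRC=preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	ssh root@192.168.50.223 "stat -c '%s %y' $TARBALL" >/dev/null 2>&1 \
	  || scp "$SRC" root@192.168.50.223:"$TARBALL"
	ssh root@192.168.50.223 "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf $TARBALL && sudo rm -f $TARBALL"
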
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
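
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective certificates: `openssl x509 -hash` prints the hash that OpenSSL expects as the file name under /etc/ssl/certs, which is why each hash run in the log is immediately followed by a matching `ln -fs`. A small sketch of that relationship for the minikubeCA certificate (illustrative only):

	# HASH evaluates to the subject hash, e.g. b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
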
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
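
Each `-checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, which lets minikube flag certs that are about to expire. A one-off check of a single cert looks like this (illustrative):

	# exit status 0 means the cert is still valid for at least another 24 h
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "apiserver-kubelet-client.crt: valid for at least 24 h"
	else
	    echo "apiserver-kubelet-client.crt: expires within 24 h (or is unreadable)"
	fi
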
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
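
The stale-config pass above applies the same check to each of the four kubeconfigs: grep for the expected control-plane endpoint and delete the file when the grep fails (here all four files are missing, so each grep exits with status 2 and each `rm -f` is a no-op). The equivalent loop, written out as a sketch (illustrative only):

	# check each kubeconfig for the expected endpoint; drop it if the check fails
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
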
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
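	The cycle above repeats throughout this log: the harness first probes for a kube-apiserver process, then enumerates CRI containers for each expected control-plane component, and, finding none, falls back to gathering host logs. Below is a minimal illustrative sketch of that per-component probe (not minikube's actual implementation; only the component names and the crictl invocation are taken from the log, everything else is assumed for the example):

	```go
	// Sketch of the per-component probe seen in the log: for each expected
	// control-plane component, list matching CRI containers with crictl and
	// report how many were found.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same command the harness runs over SSH:
			//   sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
	```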
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
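	The recurring "failed describe nodes ... Process exited with status 1" entries follow directly from the empty container listings: kubectl targets localhost:8443 via the kubeconfig, and with no kube-apiserver container running the TCP connection is refused, so kubectl exits non-zero and only stderr carries the reason. A minimal sketch of that failure mode, assuming standard os/exec behaviour (the command and paths are copied from the log; the error handling is illustrative):

	```go
	// Run the same kubectl command the harness runs and surface the exit
	// code plus stderr when the apiserver is unreachable.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr

		if err := cmd.Run(); err != nil {
			// An *exec.ExitError means the command ran but returned non-zero,
			// which the harness logs as "Process exited with status 1".
			if exitErr, ok := err.(*exec.ExitError); ok {
				fmt.Printf("kubectl exited with code %d\n", exitErr.ExitCode())
			}
			fmt.Printf("stderr: %s", stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}
	```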
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
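	The cycle above (pgrep for kube-apiserver, then a crictl probe per control-plane component, then kubelet/dmesg/describe-nodes/CRI-O/container-status collection) repeats every few seconds while minikube waits for the apiserver to come back. A minimal sketch of the same probe run by hand on the node, using only commands that appear verbatim in this log (assumes crictl is configured against CRI-O as in this run):
	
	sudo crictl ps -a --quiet --name=kube-apiserver    # empty output => the "0 containers" lines above
	sudo crictl ps -a --quiet --name=etcd
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # harness fallback for "container status"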
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>

	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
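	The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is otherwise removed before kubeadm init. A compact sketch of the same check, with the paths and endpoint taken from this log:
	
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"    # missing or stale, as in the grep failures above
	done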
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
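	The kubelet-check lines above show what kubeadm was polling: the kubelet's local health endpoint. The same check can be run directly on the node with the command quoted in the log; a refused connection reproduces the failure seen here:
	
	curl -sSL http://localhost:10248/healthz    # "connection refused" matches the kubelet-check messages above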
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
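	The init attempt failed because the kubelet never became healthy on 127.0.0.1:10248. The failure text lists the next diagnostic steps; collected into one sequence to run on the affected node (every command appears verbatim in the message above, with the CRI-O socket used in this run):
	
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # CONTAINERID from the previous command
	systemctl enable kubelet.service    # addresses the [WARNING Service-Kubelet] above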
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	* 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	* 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
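The repeated kubelet-check failures above mean the kubelet never answered on 127.0.0.1:10248, so kubeadm timed out in the wait-control-plane phase and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch, reusing only the commands and the --extra-config=kubelet.cgroup-driver=systemd flag that the captured log itself suggests (profile name and other flags taken from the failing command; the exact invocation below is an assumption, not part of the recorded run):

	# Inspect the kubelet on the node, as the kubeadm output advises (commands taken from the log)
	out/minikube-linux-amd64 ssh -p old-k8s-version-323416 -- "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-323416 -- "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-323416 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start with the cgroup driver the log suggests (hypothetical follow-up, not run here)
	out/minikube-linux-amd64 start -p old-k8s-version-323416 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd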
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (232.678689ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25: (1.517288785s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
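
The repeated "Error dialing TCP ... no route to host" lines above, followed by the decision to retry StartHost in 5 seconds, show a plain dial-and-retry pattern. The Go sketch below is illustrative only: the function name, attempt count and backoff values are assumptions made for this report, not minikube's actual retry.go logic.

// Hypothetical sketch (not minikube's code): dial a TCP endpoint with a
// timeout and retry with a capped, growing delay, in the spirit of the
// "will retry after ..." lines in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, base time.Duration) (net.Conn, error) {
	var lastErr error
	delay := base
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(delay)
		if delay < 8*time.Second {
			delay *= 2 // grow the backoff, but keep it bounded
		}
	}
	return nil, fmt.Errorf("dial %s: giving up after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("192.168.61.55:22", 5, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
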
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
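
The configureAuth step above copies the host CA material and generates a server certificate with san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]. The sketch below shows roughly what issuing such a certificate looks like with Go's crypto/x509; it is a simplified, self-signed stand-in (the real provisioner signs with the minikube CA key), and the values are copied from the log purely for illustration.

// Illustrative sketch, not provision.go: issue a server certificate whose
// SANs match the ones listed in the log above. Self-signed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-223942"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-223942", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.238")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server certificate (%d DER bytes)\n", len(der))
}
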
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
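
The guest clock check above reads date +%s.%N inside the VM and compares it with the host's wall clock, accepting the ~79.9ms difference as within tolerance. A minimal sketch of that comparison follows; the tolerance value is an assumption, since the actual threshold used by fix.go does not appear in this log.

// Minimal sketch (assumed logic, not fix.go): absolute guest/host clock
// delta, checked against a tolerance.
package main

import (
	"fmt"
	"time"
)

func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1728685444617606783)     // 1728685444.617606783 from the log
	host := guest.Add(-79924165 * time.Nanosecond) // reproduces the 79.924165ms delta
	d, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Println(d, ok) // 79.924165ms true
}
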
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
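
The sed edits above set the pause image, switch cri-o to the cgroupfs cgroup manager, pin conmon_cgroup to "pod" and allow unprivileged low ports via default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. Assuming CRI-O's documented TOML layout (pause_image under [crio.image], the rest under [crio.runtime]), which is not shown verbatim in this log, the drop-in those edits converge on would look roughly like the content printed by this sketch:

// Sketch only: print the assumed end state of 02-crio.conf after the sed
// edits above. Section names follow CRI-O's documented TOML layout.
package main

import "fmt"

func main() {
	const conf = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	fmt.Print(conf)
}
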
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
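
The two sudo crictl images --output json runs bracket the preload step: the first (before the tarball copy) finds no kube-apiserver:v1.31.1 image and triggers the scp and tar/lz4 extraction into /var, while the second confirms the images are now present so loading is skipped. A toy sketch of that decision follows; the image lists are hard-coded here rather than parsed from crictl output.

// Toy sketch of the preload decision above: only extract the preload tarball
// when a required image is missing from the runtime's image list.
package main

import "fmt"

func needsPreload(have []string, required string) bool {
	for _, tag := range have {
		if tag == required {
			return false
		}
	}
	return true
}

func main() {
	before := []string{"registry.k8s.io/pause:3.10"} // first crictl listing (illustrative)
	after := append(before, "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(needsPreload(before, "registry.k8s.io/kube-apiserver:v1.31.1")) // true  -> extract tarball
	fmt.Println(needsPreload(after, "registry.k8s.io/kube-apiserver:v1.31.1"))  // false -> skip loading
}
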
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
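	(The one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts by printing every line except the old entry and appending the new IP. The Go sketch below mirrors that rewrite under assumptions of my own; the helper name and layout are not minikube code, which uses the shell pipeline shown in the log.)

	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    // pinHostsEntry drops any existing line for host and appends a fresh
	    // "ip<TAB>host" entry, the same effect as the grep -v / echo pipeline.
	    func pinHostsEntry(hosts, ip, host string) string {
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
	    		if strings.HasSuffix(line, "\t"+host) {
	    			continue // old mapping for this host name
	    		}
	    		kept = append(kept, line)
	    	}
	    	kept = append(kept, ip+"\t"+host)
	    	return strings.Join(kept, "\n") + "\n"
	    }

	    func main() {
	    	before := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n"
	    	fmt.Print(pinHostsEntry(before, "192.168.72.238", "control-plane.minikube.internal"))
	    }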
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
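	(Each of the "openssl x509 -noout -in <cert> -checkend 86400" runs above asks whether the named certificate will still be valid 24 hours from now. A small standard-library Go equivalent follows; the function name and example path are placeholders for illustration, not minikube code.)

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the first certificate in the PEM file at
	    // path expires within d, mirroring `openssl x509 -checkend`.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	    	raw, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(raw)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM data in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	// Placeholder path; the test checks files under /var/lib/minikube/certs.
	    	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	    	fmt.Println(soon, err)
	    }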
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
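	(The healthz probes above poll https://192.168.72.238:8443/healthz until the failed poststart hooks clear and the endpoint returns 200. The sketch below shows that kind of wait loop under my own assumptions: TLS verification is skipped because the apiserver serves a cluster-local certificate, and the names and intervals are illustrative rather than minikube's api_server.go.)

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	stop := time.Now().Add(timeout)
	    	for time.Now().Before(stop) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil
	    			}
	    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("apiserver at %s never became healthy", url)
	    }

	    func main() {
	    	fmt.Println(waitForHealthz("https://192.168.72.238:8443/healthz", 4*time.Minute))
	    }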
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
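
A quick check of the clock-skew figures logged above: the guest returned 1728685463.735816087 from "date +%s.%N", the host-side timestamp recorded alongside it was 1728685463.649624165, and 1728685463.735816087 - 1728685463.649624165 = 0.086191922 s, i.e. the 86.191922ms delta that fix.go reports as within tolerance, so no guest clock adjustment is needed.
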
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
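
Taken together, the sed commands above pin the pause image, switch cri-o to the cgroupfs cgroup manager, run conmon in the pod cgroup, and allow unprivileged low ports via a default sysctl, after which crio is restarted. A quick way to confirm the result on the VM is sketched below; the expected lines are reconstructed from the sed commands, not captured from this run:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",
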
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
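
The sequence above is the preload path: the ~388 MB preloaded-images tarball is copied into the VM, unpacked under /var (which holds cri-o's image store under /var/lib/containers), removed, and crictl images is re-run to confirm that everything needed for v1.31.1 is already present. Reproduced by hand against this profile, the unpack step would look roughly like the sketch below; the running code uses its internal ssh_runner rather than the minikube ssh helper:

    minikube ssh -p default-k8s-diff-port-070708 -- \
      "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
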
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
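
Each test-and-link command above installs a certificate into the OpenSSL trust directory under its subject-hash name, which is where the 3ec20f2e, b5213941 and 51391683 values in the preceding lines come from. Done manually for the first certificate, the equivalent is roughly:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem)   # prints 3ec20f2e for this cert
    sudo ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/${HASH}.0
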
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
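
The -checkend 86400 invocations above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; only the exit status matters, for example:

    # exits 0 if the certificate will NOT expire within the next 86400 seconds (24h), non-zero otherwise
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "certificate ok" || echo "certificate expires within 24h"
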
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
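The log lines above show the CRI-O preparation sequence: point crictl at the crio socket, sed the pause image and cgroup_manager into /etc/crio/crio.conf.d/02-crio.conf, load br_netfilter, enable ip_forward, then daemon-reload and restart crio before waiting on the socket. A minimal Go sketch of that same command sequence is below; the runAll helper and the use of os/exec locally are illustrative assumptions, since minikube actually drives these commands over SSH through its internal ssh_runner.

    // Sketch only: replays the CRI-O preparation commands seen in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runAll executes each command in order and stops on the first failure,
    // mirroring the sequence of "Run:" lines above.
    func runAll(cmds [][]string) error {
    	for _, c := range cmds {
    		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v\n%s", c, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	steps := [][]string{
    		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
    		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
    		{"sudo", "modprobe", "br_netfilter"},
    		{"sudo", "systemctl", "daemon-reload"},
    		{"sudo", "systemctl", "restart", "crio"},
    	}
    	if err := runAll(steps); err != nil {
    		fmt.Println("crio preparation failed:", err)
    	}
    }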
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
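The repeated "will retry after ..." lines above are a backoff loop: libmachine polls for the new VM's DHCP lease with a growing, jittered delay until an IP address appears. A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the real lease lookup, and the exact delay growth is an assumption (the logged intervals only roughly double).

    // Sketch only: poll-with-backoff while waiting for a machine to come up.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP is a placeholder for querying the host DHCP leases.
    func lookupIP() (string, error) { return "", errNoIP }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Add jitter so concurrent waiters do not poll in lockstep.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return "", fmt.Errorf("timed out waiting for machine IP")
    }

    func main() {
    	ip, err := waitForIP(2 * time.Second)
    	fmt.Println(ip, err)
    }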
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
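Above, the preload path kicks in: the expected v1.20.0 images are not in the runtime, /preloaded.tar.lz4 does not exist on the node, so the cached tarball is copied over and unpacked into /var with lz4. The sketch below shows that check-then-extract logic; it assumes the tarball has already been copied to the node (the log does this with an scp step) and runs the commands locally rather than over SSH.

    // Sketch only: extract a preloaded image tarball if the images are missing.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hasImage reports whether the runtime already lists the given image.
    func hasImage(name string) bool {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	return err == nil && strings.Contains(string(out), name)
    }

    func main() {
    	if hasImage("registry.k8s.io/kube-apiserver:v1.20.0") {
    		fmt.Println("images already present, skipping preload")
    		return
    	}
    	// Same tar invocation as in the log, preserving xattrs and using lz4.
    	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := extract.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    	}
    }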
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
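The cache_images sequence above ends with the X warning because the image is neither in the runtime nor present as a cached tarball under .minikube/cache/images. The decision logic is sketched below; the inRuntime closure stands in for the `podman image inspect` probe, and the ":" to "_" filename mapping is taken from the cache paths shown in the log.

    // Sketch only: decide per image whether to skip, load from cache, or warn.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePath maps registry.k8s.io/etcd:3.4.13-0 -> .../registry.k8s.io/etcd_3.4.13-0.
    func cachePath(cacheDir, image string) string {
    	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
    	cacheDir := "/home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64"
    	needed := []string{
    		"registry.k8s.io/kube-apiserver:v1.20.0",
    		"registry.k8s.io/kube-controller-manager:v1.20.0",
    		"registry.k8s.io/etcd:3.4.13-0",
    	}
    	inRuntime := func(string) bool { return false } // stand-in for the inspect probe
    	for _, img := range needed {
    		if inRuntime(img) {
    			continue
    		}
    		p := cachePath(cacheDir, img)
    		if _, err := os.Stat(p); err != nil {
    			fmt.Printf("X Unable to load cached images: %v\n", err)
    			continue
    		}
    		fmt.Println("would load", p, "into the runtime")
    	}
    }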
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
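The bash one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal line is filtered out before the current entry is appended. A small Go sketch of the same edit is below; writing to a temp file and copying it back mirrors the one-liner, and the final sudo cp step is left as a printed hint rather than executed.

    // Sketch only: replace-or-append a single /etc/hosts entry.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.50.223\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop the old entry (if any) and blank trailing lines.
    		if line != "" && !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	tmp := "/tmp/hosts.new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("would now run: sudo cp", tmp, "/etc/hosts")
    }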
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
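The pod_ready.go lines above are a readiness poll: each control-plane pod is re-fetched until its Ready condition is True, with a 4m0s budget per pod (metrics-server never gets there, which is why the "Ready":"False" lines repeat). A rough client-go sketch of that loop follows; the kubeconfig path and the fixed 2s poll interval are illustrative assumptions.

    // Sketch only: wait for a pod's Ready condition using client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ok, _ := podReady(client, "kube-system", "etcd-default-k8s-diff-port-070708"); ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }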
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
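The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate is still valid for at least the next 24 hours before the existing files are reused. The same check done natively in Go is sketched below; the file list simply mirrors the paths in the log.

    // Sketch only: flag certificates that expire within the given window.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Equivalent to `openssl x509 -checkend <seconds>` failing.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		fmt.Println(p, "expires within 24h:", soon, "err:", err)
    	}
    }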
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
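The cleanup above follows a simple rule: for each kubeconfig that kubeadm may have left behind, keep it only if it already points at https://control-plane.minikube.internal:8443; otherwise remove it so the `kubeadm init phase kubeconfig all` run that follows regenerates it. A compact Go sketch of that rule is below (the log does the same thing with grep and rm -f over SSH).

    // Sketch only: drop kubeconfigs that are missing or point elsewhere.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q missing or stale, removing\n", f)
    			os.Remove(f) // ignore errors, same effect as `rm -f`
    		}
    	}
    }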
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
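The cycle above repeats for the rest of this run: minikube probes the CRI runtime for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager), finds none, and falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of the same probe, for illustration only (this is not the minikube code; it assumes crictl is installed on the node and sudo does not prompt):

// Sketch only, not the minikube implementation: run the same crictl probe the
// log shows for each control-plane component and report what it finds.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Empty output is what drives the repeated
			// "No container was found matching" warnings above.
			fmt.Printf("%s: no container found\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}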
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
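Every "describe nodes" attempt fails the same way: the bundled kubectl cannot reach the API server on localhost:8443, and the connection is refused rather than timing out, which indicates nothing is listening on that port at all. A quick way to confirm that from the node (a sketch, not part of the test harness) is to probe the health endpoint directly:

// Sketch only: check whether anything answers where `kubectl describe nodes`
// expects the API server. TLS verification is skipped because the only
// question is whether the port answers; even a 401/403 would prove the
// apiserver is up, while "connection refused" matches the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}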
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
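Interleaved with the log gathering, the three other test processes (77741, 77526, 77373) keep polling their metrics-server pods and logging Ready:"False" every couple of seconds until their per-test timeouts expire. The pattern behind those pod_ready.go lines is a poll-until-ready loop with a deadline; a self-contained sketch follows, with a hypothetical stand-in for the real readiness check, which goes through the Kubernetes API:

// Sketch of the poll-until-ready pattern; waitFor and the check below are
// illustrative stand-ins, not the test's implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Hypothetical check: pretend the pod turns Ready after about five seconds.
	err := waitFor(2*time.Second, 30*time.Second, func() (bool, error) {
		return time.Since(start) > 5*time.Second, nil
	})
	fmt.Println("wait result:", err)
}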
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
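	Each gathering cycle ends the same way: no control-plane containers exist, so the only usable output is the kubelet journal, dmesg, and the CRI-O journal. A quick way to confirm that nothing is actually listening on the apiserver port while the connection-refused errors persist (a sketch; ss is assumed to be available in the guest image and is not itself shown in this log):

	# expect no listener on 8443 while "connection refused" keeps appearing
	sudo ss -ltnp | grep 8443
	# kubelet messages that mention the apiserver static pod, if any were logged
	sudo journalctl -u kubelet -n 400 | grep -i apiserver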
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
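The block above shows the readiness wait expiring: pod_ready polls the metrics-server pod for 4m0s, never sees Ready, and minikube falls back to resetting the cluster rather than restarting the existing control plane. A hedged way to inspect the same condition by hand, using the pod name from the log and assuming kubectl is pointed at this cluster's kubeconfig:

	# Prints "True" once the pod reports Ready; during this run it stays "False" until the wait times out.
	kubectl -n kube-system get pod metrics-server-6867b74b74-9xr4k \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'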
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
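The repeated "describe nodes" failure is consistent with the crictl probes: no kube-apiserver container exists, so nothing answers on localhost:8443 and every kubectl call against the node is refused. A quick check for that condition, using the same pgrep pattern, binary path, and kubeconfig that appear in the log (both commands fail in this run, for the same reason):

	# If the apiserver were running, the pgrep would match and kubectl would reach localhost:8443.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes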
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
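The pass above (the kubeadm.go:155–163 lines) checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing it, so the following `kubeadm init` can rewrite them. A minimal Go sketch of that check-and-remove loop, shelling out over the same commands shown in the log; the endpoint and file list come from the log, the function and variable names are illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs mirrors the pass in the log: for each config file,
// grep for the expected API endpoint; if the file is absent or does not
// mention the endpoint, remove it so `kubeadm init` can rewrite it.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```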
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
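The sshutil.go:53 lines record new SSH clients built from the fields shown (IP, port 22, key path, user "docker"); every subsequent "Run: ..." command is executed over such a session. A rough sketch of opening that kind of connection with golang.org/x/crypto/ssh and running one of the commands from the log; this is not minikube's sshutil implementation, just an illustration of those fields:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens an SSH session with a private key and runs one command,
// roughly what each "Run: ..." line in the log does on the guest VM.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.238:22", "docker",
		"/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa",
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}
```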
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
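The long runs of pod_ready.go lines are minikube polling each system pod until its Ready condition flips to "True". A simplified, assumption-labelled sketch of the same waiting loop using kubectl's jsonpath output instead of minikube's own client code (pod and namespace names taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls one pod's Ready condition until it reports "True"
// or the timeout elapses - the same loop the pod_ready.go lines record.
func waitPodReady(ns, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "-n", ns, "get", "pod", pod, jsonpath).Output()
		if strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
}

func main() {
	fmt.Println(waitPodReady("kube-system", "etcd-embed-certs-223942", 6*time.Minute))
}
```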
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
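The healthz probe logged at api_server.go:253 is an HTTPS GET against the apiserver that expects a 200 with body "ok". A small sketch of such a check; TLS verification is disabled here as a stand-in for loading the cluster CA bundle, and the URL is the one from the log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of request the log records against
// https://192.168.72.238:8443/healthz and prints the status plus body ("ok").
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification stands in for trusting the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	checkHealthz("https://192.168.72.238:8443/healthz")
}
```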
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
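	The lines above show the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected API endpoint and removed when the endpoint (or the file itself) is missing, so that the following kubeadm init can regenerate it. A minimal shell sketch of the same check-and-remove pattern, assuming the four paths and the port-8444 endpoint seen in this run:

		endpoint="https://control-plane.minikube.internal:8444"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # remove the file if it does not reference the expected endpoint (or does not exist)
		  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"
		  fi
		done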
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
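	The join commands printed above embed the bootstrap token minted for this run; on a live cluster one would normally generate a fresh join command on the control plane instead of reusing a logged token, for example (illustrative, not part of this test):

		sudo kubeadm token create --print-join-command
		# for joining an extra control-plane node, also upload the certs and note the printed key:
		sudo kubeadm init phase upload-certs --upload-certs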
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
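	The repeated "kubectl get sa default" calls above are a poll loop: minikube retries until the default ServiceAccount exists, at which point elevateKubeSystemPrivileges is considered complete. An equivalent shell wait (a sketch, using the same kubeconfig path seen in this run):

		until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # retry until the default ServiceAccount appears
		done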
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
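	After the manifests are applied, the metrics-server Deployment still has to come up (its pod is listed as Pending further down). A few illustrative checks one could run to see whether it ever starts serving the metrics API (assumed commands, not part of the test):

		kubectl -n kube-system rollout status deploy/metrics-server --timeout=2m
		kubectl get apiservices v1beta1.metrics.k8s.io   # should report Available=True once serving
		kubectl top nodes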
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
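	The pod_ready waits above cover CoreDNS, etcd, the API server, the controller manager, kube-proxy and the scheduler. Roughly equivalent checks with plain kubectl (a sketch; the label selectors are assumed to match the standard kube-system labels):

		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m
		kubectl -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'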
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
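	The healthz probe above can be reproduced by hand; by default the API server allows anonymous access to its health and version endpoints (a sketch assuming that default and the address used in this run):

		curl -sk https://192.168.39.162:8444/healthz; echo
		curl -sk https://192.168.39.162:8444/version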
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
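	When the kubelet health check loops like this, the usual (assumed) triage on the node is to inspect the kubelet unit and its logs directly:

		sudo systemctl status kubelet --no-pager
		sudo journalctl -u kubelet -n 100 --no-pager
		curl -s http://127.0.0.1:10248/healthz; echo   # the endpoint kubeadm is polling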
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
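	(The join commands above embed a bootstrap token that expires, 24h by default. If it lapses before another node is added, a fresh worker join command can be regenerated on the control plane; these are standard kubeadm commands shown for reference, not something this test runs:

		kubeadm token create --print-join-command
		# for an additional control-plane node, also regenerate the certificate key:
		kubeadm init phase upload-certs --upload-certs
	)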
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
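	(minikube copies a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist, but its contents are not logged. A generic bridge + host-local conflist of the same shape looks roughly like the sketch below; the name, subnet, and plugin list are illustrative assumptions, not the file minikube actually writes:

		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{ "cniVersion": "0.3.1", "name": "bridge",
		  "plugins": [
		    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
		    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
		EOF
	)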
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
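	(The bootstrap steps above create the minikube-rbac ClusterRoleBinding and label the node. Both can be confirmed from the host with names taken from this log; illustrative commands only, not part of the test:

		kubectl --context no-preload-390487 get clusterrolebinding minikube-rbac -o wide
		kubectl --context no-preload-390487 get node no-preload-390487 --show-labels
	)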
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
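	(The pod_ready loop polls each system-critical pod individually. Roughly the same check could be reproduced by hand with kubectl wait, e.g. for the CoreDNS pods; illustrative, not what the test runs:

		kubectl --context no-preload-390487 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
	)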
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
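	(Once the metrics-server manifests are applied, readiness can be spot-checked manually; the deployment name matches the pod name seen later in this log. Illustrative commands, not part of the test:

		kubectl --context no-preload-390487 -n kube-system rollout status deployment/metrics-server
		kubectl --context no-preload-390487 top nodes   # only works after the metrics APIService starts serving
	)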
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
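	(The healthz probe above goes through minikube's API client. The same endpoints can usually be hit directly, since default RBAC exposes /healthz, /readyz and /livez to anonymous requests; if anonymous auth were disabled, a client certificate from /var/lib/minikube/certs would be needed. Shown for illustration only:

		curl -sk https://192.168.61.55:8443/healthz
		curl -sk https://192.168.61.55:8443/readyz?verbose
	)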
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
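	(At this point the kubeconfig under /home/jenkins/minikube-integration/19749-11611/kubeconfig has the new context selected; switching to it explicitly and listing nodes is a quick sanity check. Illustrative:

		kubectl config use-context no-preload-390487
		kubectl get nodes -o wide
	)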
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
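	(The only preflight warning in this run is the disabled kubelet service; enabling it clears the warning, but the commands kubeadm itself suggests are what usually reveal the real failure. Collected here for convenience, mirroring the log above:

		sudo systemctl enable kubelet.service
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 50
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	)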
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
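	(The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is removed unless it already points at control-plane.minikube.internal:8443. A compact shell equivalent, purely illustrative:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	)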
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
	
	
	==> CRI-O <==
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.852509961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728685972852489948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b0bbcc0-ff10-4769-b5b6-62ee45d1995e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.853094855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2a0b40c-a9ff-48d0-8c86-719bdfd915f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.853167268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2a0b40c-a9ff-48d0-8c86-719bdfd915f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.853204571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2a0b40c-a9ff-48d0-8c86-719bdfd915f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.885591572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f460aae-1181-4ede-aa5f-204ec29de04b name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.885686513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f460aae-1181-4ede-aa5f-204ec29de04b name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.887907534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e91ad5c4-ee63-4756-84a5-1f256d29f487 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.888276411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728685972888253263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e91ad5c4-ee63-4756-84a5-1f256d29f487 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.888925469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b70be6db-def1-4b51-b26b-107f0d3bf119 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.888994070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b70be6db-def1-4b51-b26b-107f0d3bf119 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.889025199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b70be6db-def1-4b51-b26b-107f0d3bf119 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.922893162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b4a5d0f-3640-4000-9eac-b41bbc66617d name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.922961224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b4a5d0f-3640-4000-9eac-b41bbc66617d name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.924241016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c07f5e04-2c65-4bf2-9ba1-a6d92ce5a811 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.924811356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728685972924709202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c07f5e04-2c65-4bf2-9ba1-a6d92ce5a811 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.925262771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=203bba88-d0c5-4c2d-81bc-798d402967a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.925323633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=203bba88-d0c5-4c2d-81bc-798d402967a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.925368088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=203bba88-d0c5-4c2d-81bc-798d402967a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.961315690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4050c326-6c36-4518-b77e-5ffb6ada609a name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.961381018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4050c326-6c36-4518-b77e-5ffb6ada609a name=/runtime.v1.RuntimeService/Version
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.962450574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=345d3a56-5af5-4b18-8556-1f238c71b9a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.962853155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728685972962833336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=345d3a56-5af5-4b18-8556-1f238c71b9a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.963257656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a70ad3-cebd-4c8e-a157-7452e56f0821 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.963301015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a70ad3-cebd-4c8e-a157-7452e56f0821 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:32:52 old-k8s-version-323416 crio[634]: time="2024-10-11 22:32:52.963331520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=75a70ad3-cebd-4c8e-a157-7452e56f0821 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct11 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050928] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.110729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.580711] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636937] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.157348] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.054654] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064708] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.165294] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.159768] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.272781] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.674030] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.066044] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.222707] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[Oct11 22:25] kauditd_printk_skb: 46 callbacks suppressed
	[Oct11 22:28] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Oct11 22:30] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.064434] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:32:53 up 8 min,  0 users,  load average: 0.00, 0.05, 0.01
	Linux old-k8s-version-323416 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005e66f0)
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a9bef0, 0x4f0ac20, 0xc000547950, 0x1, 0xc0001000c0)
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024ed20, 0xc0001000c0)
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bda1f0, 0xc000961ea0)
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5541]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 11 22:32:50 old-k8s-version-323416 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 11 22:32:50 old-k8s-version-323416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 11 22:32:50 old-k8s-version-323416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 11 22:32:50 old-k8s-version-323416 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 11 22:32:50 old-k8s-version-323416 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5597]: I1011 22:32:50.895517    5597 server.go:416] Version: v1.20.0
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5597]: I1011 22:32:50.895856    5597 server.go:837] Client rotation is on, will bootstrap in background
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5597]: I1011 22:32:50.898763    5597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5597]: W1011 22:32:50.900483    5597 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 11 22:32:50 old-k8s-version-323416 kubelet[5597]: I1011 22:32:50.901351    5597 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (226.435172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-323416" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (736.57s)
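
For reference, the kubeadm output captured above names a small set of on-node checks; a minimal sketch of running them against the same cri-o socket (assuming shell access to the old-k8s-version-323416 VM, e.g. via `minikube ssh -p old-k8s-version-323416`) would be:

	# probe the kubelet healthz endpoint that the [kubelet-check] phase polls
	curl -sSL http://localhost:10248/healthz
	# inspect the kubelet service state and its recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any Kubernetes containers the runtime managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

In this run the container-status table is empty and the kubelet exits with status 255 in a restart loop (restart counter at 20), consistent with the K8S_KUBELET_NOT_RUNNING exit reported above.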

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1011 22:29:23.067644   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223942 -n embed-certs-223942
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:38:18.570835889 +0000 UTC m=+6018.585193394
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
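The wait above is on pods matching the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace; a minimal sketch of inspecting that selector directly (assuming the embed-certs-223942 kubeconfig context is reachable) would be:

	kubectl --context embed-certs-223942 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-223942 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard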
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-223942 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-223942 logs -n 25: (1.97795779s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
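The sed edits above leave their result in the CRI-O drop-in that was modified. As a quick spot check (illustrative only; the profile name and file path are taken from this log), the touched keys can be read back from inside the VM after the restart:
	minikube -p default-k8s-diff-port-070708 ssh -- \
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf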
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
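
For reference, the preload step logged above boils down to unpacking the lz4-compressed tarball into /var on the guest and then removing it. A minimal Go sketch of that step (not minikube's own code; it assumes the tarball was already copied to /preloaded.tar.lz4 and that sudo, tar and lz4 are available on the node):

    // preload_extract.go: rough sketch of the preload extraction seen above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	const tarball = "/preloaded.tar.lz4" // path used in the log above

    	if _, err := os.Stat(tarball); err != nil {
    		// In the real run the tarball is scp'd over first; here we just bail out.
    		fmt.Fprintln(os.Stderr, "preload tarball not present:", err)
    		os.Exit(1)
    	}
    	// Same extraction command as in the log: preserve xattrs, decompress with lz4.
    	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
    		fmt.Fprintln(os.Stderr, "extract failed:", err)
    		os.Exit(1)
    	}
    	_ = os.Remove(tarball) // mirrors the "rm: /preloaded.tar.lz4" step
    }
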
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
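
The kubeadm.yaml rendered above and scp'd to /var/tmp/minikube/kubeadm.yaml.new is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for splitting that stream and listing each document's apiVersion/kind, assuming gopkg.in/yaml.v3 and read access to the path used in the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // no more documents in the stream
    		} else if err != nil {
    			fmt.Fprintln(os.Stderr, "decode:", err)
    			os.Exit(1)
    		}
    		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
    	}
    }
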
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
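
The three pairs of runs above install each PEM into the system trust store by computing its OpenSSL subject hash and symlinking it as /etc/ssl/certs/<hash>.0. A rough Go equivalent of one iteration (shelling out to openssl for the hash; the symlink needs root, like the sudo calls in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// ln -fs equivalent: replace any stale link, then point it at the PEM.
    	_ = os.Remove(link)
    	if err := os.Symlink(certPath, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", certPath)
    }
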
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
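
The -checkend 86400 calls above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be done with Go's standard library; a sketch using a few of the paths from the log (run on the node, where the files are readable):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the certificate at path expires within d,
    // mirroring `openssl x509 -checkend` as used in the log above.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		expiring, err := checkend(p, 24*time.Hour)
    		fmt.Println(p, "expiring within 24h:", expiring, "err:", err)
    	}
    }
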
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
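
The healthz polling above tolerates the early 403 (anonymous access before RBAC bootstrap) and 500 (bootstrap-roles and priority-classes post-start hooks still running) responses and only succeeds once /healthz returns 200 "ok". A rough standalone sketch of that loop, using the endpoint from the log and skipping TLS verification to stay self-contained (the real code pins minikubeCA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is signed by minikubeCA; skipping verification
    			// keeps this sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.39.162:8444/healthz" // endpoint from the log
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // healthz is "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }
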
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
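
The kube-system pod listing above can be reproduced against the restarted cluster with client-go; a minimal sketch, assuming it runs on the node where /etc/kubernetes/admin.conf is readable and k8s.io/client-go is available:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Admin kubeconfig written by the kubeadm kubeconfig phase above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }
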
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
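
WaitForSSH above simply retries `exit 0` over SSH with non-interactive options until the command succeeds. A small sketch of the same loop (key path and address taken from the log; the option list is abridged):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10", "-o", "PasswordAuthentication=no",
    		"-i", "/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa",
    		"docker@192.168.50.223", "exit 0",
    	}
    	for i := 0; i < 60; i++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			fmt.Println("ssh is available")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("timed out waiting for ssh")
    }
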
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
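The pod_ready lines above poll each control-plane pod until its Ready condition flips to True or the 4m0s budget expires. A minimal client-go sketch of that check; the kubeconfig path is illustrative, the pod name is taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Build a client from a local kubeconfig (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-070708", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll every couple of seconds, as in the log
	}
	fmt.Println("timed out waiting for Ready")
}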
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
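Each PEM copied into the trust directory above is then symlinked under its OpenSSL subject hash (for example b5213941.0), which is how TLS libraries locate it during verification. A sketch of that step, shelling out for the hash the same way the log does; the path mirrors the minikubeCA.pem case:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/etc/ssl/certs/minikubeCA.pem"

	// openssl x509 -hash -noout -in <pem> prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	// ln -fs <pem> /etc/ssl/certs/<hash>.0 (needs root, hence the sudo in the log)
	if err := exec.Command("sudo", "ln", "-fs", pemPath, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}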
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
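The `openssl x509 -checkend 86400` calls above exit non-zero if a certificate expires within the next 24 hours, which is what triggers regeneration. The same check expressed with crypto/x509; a sketch, using one of the cert paths listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}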
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
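The grep/rm sequence above is the stale-config cleanup: any /etc/kubernetes/*.conf that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A local-filesystem sketch of that logic, with the endpoint string and file list taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove so kubeadm recreates it.
			_ = os.Remove(path)
			fmt.Println("removed stale config:", path)
			continue
		}
		fmt.Println("keeping:", path)
	}
}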
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
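Rather than a full `kubeadm init`, the restart path above runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A sketch of that sequence; binary and config paths are copied from the log, the phase list mirrors the commands above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Sprintf("phase %v failed: %v", phase, err))
		}
	}
}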
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
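The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a fixed-interval wait for the apiserver process to appear after the init phases. A sketch of that polling loop; the 500ms interval matches the spacing of the log entries, the overall timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}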
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
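The shell fragment run over SSH above keeps /etc/hosts consistent with the new hostname: if no line already names no-preload-390487, it either rewrites an existing 127.0.1.1 entry or appends one. The same idea as a Go sketch over a local file:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	const host = "no-preload-390487"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)

	// Already present? Nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(host) + `$`).MatchString(text) {
		fmt.Println("hostname already in /etc/hosts")
		return
	}

	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		// Rewrite the existing 127.0.1.1 entry.
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+host)
	} else {
		// Or append a new one.
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + host + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}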
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
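The guest-clock lines above compare the VM's `date +%s.%N` output against the host clock and only continue when the delta (61ms here) is inside tolerance. A sketch of that comparison, running date locally instead of over SSH; the tolerance constant is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const tolerance = 2 * time.Second // placeholder; minikube applies its own threshold

	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("clock delta: %v\n", delta)
	if delta > tolerance {
		fmt.Println("guest clock outside tolerance, would resync")
	}
}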
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
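The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before the daemon is reloaded and crio restarted. A sketch of the two central edits done with regexp instead of sed, with the values copied from the log; like the sudo'd commands above, it only works when run as root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	text := string(data)

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(conf, []byte(text), 0644); err != nil {
		panic(err)
	}
	// Pick up the new config.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
	fmt.Println("crio reconfigured and restarted")
}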
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
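The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the wait loop for the apiserver process: the same probe is re-issued roughly every 500ms until pgrep exits 0. A minimal, self-contained sketch of that polling pattern in Go follows (this is not minikube's actual code; only the command string and the interval are taken from the log, and the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess re-runs pgrep until the kube-apiserver process
// shows up or the timeout expires, matching the ~500ms cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver process is up")
}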
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
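The LoadCachedImages phase summarised above follows a stat-then-load pattern for each image: stat the tarball on the guest, skip the transfer when it already exists ("copy: skipping ... (exists)"), then load it into the container store with `sudo podman load -i` and remove the stale registry tag with `crictl rmi`. A rough sketch of the skip-and-load step is below; it runs against local paths for simplicity, whereas minikube executes these commands over SSH, and the function name and paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the stat-then-load pattern from the log: skip the
// copy when the tarball already exists, then load it with podman.
func loadCachedImage(cachedTar, destTar string) error {
	if _, err := os.Stat(destTar); os.IsNotExist(err) {
		data, readErr := os.ReadFile(cachedTar)
		if readErr != nil {
			return fmt.Errorf("read cache %s: %w", cachedTar, readErr)
		}
		if writeErr := os.WriteFile(destTar, data, 0o644); writeErr != nil {
			return fmt.Errorf("copy to %s: %w", destTar, writeErr)
		}
	}
	// "sudo podman load -i <tar>" is the exact command shown in the log.
	if out, err := exec.Command("sudo", "podman", "load", "-i", destTar).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3",
		"/var/lib/minikube/images/coredns_v1.11.3",
	)
	if err != nil {
		panic(err)
	}
}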
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
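The 2158-byte kubeadm.yaml.new written above bundles the documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) into one multi-document YAML stream. A small, hypothetical check that such a stream contains the expected kinds, assuming the gopkg.in/yaml.v3 package (this is not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// main reads a multi-document kubeadm config and prints the apiVersion and
// kind of each document in the stream.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}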
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
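The `openssl x509 -noout -in <cert> -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another 24 hours. An equivalent check can be written with Go's standard library; the path below is one of those from the log, and the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path is still valid for at
// least the given duration, mirroring `openssl x509 -checkend`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for another 24h:", ok)
}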
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
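(For readers reproducing the health wait above: the log is polling the apiserver's /healthz endpoint until it stops returning 500 with failed post-start hooks and returns 200 "ok". A minimal, illustrative Go probe of the same endpoint is sketched below; it skips TLS verification as a debugging convenience and is not minikube's api_server.go implementation.)

// Illustrative probe of the /healthz endpoint shown in the log above.
// Assumption: cluster certs are not at hand, so TLS verification is skipped.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoint taken from the log; "?verbose" asks the apiserver to list each check.
	resp, err := client.Get("https://192.168.61.55:8443/healthz?verbose")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 with "ok" means healthy; 500 lists [+]/[-] per post-start hook, as above.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}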
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
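(Context for the conflist written above: the exact 496-byte file minikube generated is not reproduced in the log, but a bridge CNI conflist placed under /etc/cni/net.d generally has the shape sketched below. All field values here, including the subnet, are illustrative assumptions only.)

// Hypothetical bridge CNI conflist, printed for illustration; not the file from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // illustrative subnet, not taken from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}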
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
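(The pod_ready.go lines above and below are repeated readiness polls against kube-system pods, roughly every two seconds with a 4m0s budget. A minimal sketch of that kind of poll using client-go follows; it is not minikube's implementation, and the kubeconfig path and pod name are taken from the log only as examples.)

// Minimal readiness poll sketch, assuming client-go is available.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-tk8fq", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows checks roughly every 2s
	}
	fmt.Println("timed out waiting for Ready")
}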
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
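(The cri.go/logs.go block above is a loop over component names, running the same crictl command for each and reporting "0 containers" when nothing matches; every listing here comes back empty because the control plane is not running yet. A small Go sketch of that loop, using exactly the command shown in the log, is below; it is illustrative and simplifies error handling.)

// Sketch of the per-component container listing loop seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}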
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
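(The "connection to the server localhost:8443 was refused" failures above simply mean nothing is listening on the apiserver port while it is down. A hedged, minimal way to confirm that from the node is sketched below; it is not part of the test itself.)

// Illustrative check that nothing is listening on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err) // expected while kube-apiserver is down
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}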
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
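	The interleaved pod_ready checks show metrics-server pods in three parallel clusters stuck with Ready=False throughout this window. A hedged sketch of how that condition would normally be inspected, assuming the standard k8s-app=metrics-server label and the pod name taken from this log (context selection omitted; not part of the captured run):
	
	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl -n kube-system describe pod metrics-server-6867b74b74-l7xbw   # events, readiness-probe failures
	    kubectl -n kube-system logs deploy/metrics-server --tail=50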
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
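	Every describe-nodes attempt in these cycles fails with a refused connection on localhost:8443, i.e. nothing is serving the apiserver port on the node. A minimal diagnostic sketch, assuming SSH access to the minikube VM and a kubeadm-style layout (illustrative only, not part of the captured run):
	
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    ls /etc/kubernetes/manifests/          # static pod manifests the kubelet should be starting
	    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'apiserver|failed'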
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
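	The probe cycle above repeats one pattern for every control-plane component: list CRI containers by name and warn when none are found. A minimal shell sketch of that probe, assuming crictl is on PATH and using the same component names the log queries, would be:

	# Probe each expected component; report the ones with no container at all.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  fi
	done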
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
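	Each config check above follows the same pattern: grep the kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent (a missing file is treated the same way). A minimal sketch of that cleanup, assuming the endpoint and file paths shown in the log, would be:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Remove the kubeconfig if it is missing or does not reference the expected endpoint.
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done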
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
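
The pod_ready.go lines above reflect a simple loop: fetch each system pod and check its Ready condition until it turns True or the 6m0s budget runs out. The sketch below is only an illustrative client-go version of that pattern, not minikube's actual code; the kubeconfig path is a placeholder and the pod name is copied from the log for flavor.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; adjust for a real cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s for up to 6m, mirroring "waiting up to 6m0s ... to be Ready".
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-070708", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Println("pod never became Ready:", err)
            return
        }
        fmt.Println("pod is Ready")
    }
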
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
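
The api_server.go lines above correspond to an HTTPS GET against the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal sketch of such a probe follows; it is not minikube's implementation, it skips TLS verification purely for illustration (real callers should trust the cluster CA), and the address 192.168.39.162:8444 is simply the one shown in the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; use the cluster CA in real code.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.162:8444/healthz")
        if err != nil {
            fmt.Println("healthz not reachable yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
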
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
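
The system_svc.go check above runs `systemctl is-active --quiet service kubelet` over SSH and treats a zero exit status as "running". Run locally, the same idea can be sketched with os/exec (illustrative only; minikube performs this through its SSH runner and with its own argument form):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }
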
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
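
The kubeadm.go:163 lines above apply a simple rule before re-running `kubeadm init`: if an existing /etc/kubernetes/*.conf no longer mentions https://control-plane.minikube.internal:8443, it is considered stale and removed so kubeadm regenerates it. A hedged, local-filesystem sketch of that rule (minikube itself does this with grep and rm over SSH, as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or stale config: remove it so kubeadm writes a fresh one.
                _ = os.Remove(f)
                fmt.Println("removed stale config:", f)
                continue
            }
            fmt.Println("keeping config:", f)
        }
    }
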
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
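
The elevateKubeSystemPrivileges step above retries `kubectl get sa default` until the default ServiceAccount exists, and earlier in the log creates the `minikube-rbac` ClusterRoleBinding granting cluster-admin to kube-system:default. Below is a client-go sketch of those two operations under simplified assumptions (placeholder kubeconfig path, minimal error handling); it is not minikube's code.

    package main

    import (
        "context"
        "fmt"
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // Wait for the "default" ServiceAccount, like the repeated `kubectl get sa default` calls above.
        err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return err == nil, nil // retry on any error until timeout
            })
        if err != nil {
            panic(err)
        }

        // Bind cluster-admin to kube-system:default, like `kubectl create clusterrolebinding minikube-rbac ...`.
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
            Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
        }
        if _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
            panic(err)
        }
        fmt.Println("default ServiceAccount present and minikube-rbac binding ensured")
    }
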
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
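
The node_ready.go lines above check the node's Ready condition. A minimal client-go sketch of that check; the kubeconfig path is a placeholder and the node name is taken from the log.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.Background(), "no-preload-390487", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
            }
        }
    }
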
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
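
The "Verifying addon metrics-server=true" line above refers to checking that the metrics-server workload actually comes up (its pod is still Pending in the listings that follow). One hedged way to check this from the outside, not necessarily how minikube does it internally, is to read the Deployment's Available condition with client-go; the Deployment name "metrics-server" is inferred from the pod name in the log.

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        available := false
        for _, c := range dep.Status.Conditions {
            if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
                available = true
            }
        }
        fmt.Printf("metrics-server: %d/%d replicas ready, Available=%v\n",
            dep.Status.ReadyReplicas, dep.Status.Replicas, available)
    }
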
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
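
The node_conditions.go lines report each node's ephemeral-storage and CPU capacity and verify that no pressure conditions are set. A small client-go sketch that surfaces the same information (illustrative only, with a placeholder kubeconfig path):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, c := range n.Status.Conditions {
                // Conditions other than Ready (MemoryPressure, DiskPressure, PIDPressure,
                // NetworkUnavailable) should all be False on a healthy node.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure/problem condition set: %s\n", c.Type)
                }
            }
        }
    }
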
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
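	(The stale-config cleanup above follows a simple pattern: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init is retried. A rough shell equivalent, sketched from the commands shown in the log rather than taken from minikube's source:)

		endpoint="https://control-plane.minikube.internal:8443"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep the file only if it already points at the expected endpoint
		  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done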
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
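	(The suggestion above points at a kubelet cgroup-driver mismatch as the likely cause of the repeated 10248 health-check failures. Before reading the CRI-O log below, a few checks one could run on the node to confirm it; this is a sketch using the standard kubelet/CRI-O file locations, not commands captured in this run:)

		# cgroup driver the kubelet was configured with
		sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
		# cgroup manager CRI-O is configured to use
		sudo grep -r cgroup_manager /etc/crio/ 2>/dev/null
		# the kubelet's own view of why it keeps failing the localhost:10248 health check
		sudo journalctl -xeu kubelet | tail -n 50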
	
	
	==> CRI-O <==
	Oct 11 22:38:19 embed-certs-223942 crio[709]: time="2024-10-11 22:38:19.970004025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686299969980066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43736f74-ce12-4556-8df6-f2819c928ef3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:19 embed-certs-223942 crio[709]: time="2024-10-11 22:38:19.970450612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ef21331-eb88-4e2c-a802-ef6b9aa7d209 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:19 embed-certs-223942 crio[709]: time="2024-10-11 22:38:19.970503969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ef21331-eb88-4e2c-a802-ef6b9aa7d209 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:19 embed-certs-223942 crio[709]: time="2024-10-11 22:38:19.970774419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ef21331-eb88-4e2c-a802-ef6b9aa7d209 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.012477268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c384e9bd-bbec-47da-898c-9e9a84cbcc72 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.012549398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c384e9bd-bbec-47da-898c-9e9a84cbcc72 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.013697904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7914799f-502c-47ba-8030-66ac40089dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.014185829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686300014163320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7914799f-502c-47ba-8030-66ac40089dd1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.014638065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fb71aa0-a094-4dc6-97f9-9e358fd23428 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.014691749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fb71aa0-a094-4dc6-97f9-9e358fd23428 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.015204085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fb71aa0-a094-4dc6-97f9-9e358fd23428 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.054332461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfe1b9c5-3d93-4a22-9bb3-4feeded38a09 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.054426453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfe1b9c5-3d93-4a22-9bb3-4feeded38a09 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.055683838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f35c1d48-2fa5-4307-920b-44d4a75e1eae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.056123838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686300056102799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f35c1d48-2fa5-4307-920b-44d4a75e1eae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.056670918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=035054d5-61fb-43e1-abc9-b3966a7d30e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.056817323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=035054d5-61fb-43e1-abc9-b3966a7d30e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.057026967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=035054d5-61fb-43e1-abc9-b3966a7d30e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.093516484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36b6fa64-8405-4d80-82d3-c9201be4e523 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.093662960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36b6fa64-8405-4d80-82d3-c9201be4e523 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.094516309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64b5bb5c-5d16-45b2-9477-d0426b610ea1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.094943117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686300094923156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64b5bb5c-5d16-45b2-9477-d0426b610ea1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.095357119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41b145d7-dea9-4a57-8094-718158d19b24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.095406382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41b145d7-dea9-4a57-8094-718158d19b24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:20 embed-certs-223942 crio[709]: time="2024-10-11 22:38:20.095601051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41b145d7-dea9-4a57-8094-718158d19b24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	73cb9493c5b87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ef9d32181014b       coredns-7c65d6cfc9-qcct7
	1aac20bc993c9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   5592caa5415ef       coredns-7c65d6cfc9-bchd4
	6e3de90b28419       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   908491122f8de       storage-provisioner
	415392e78e166       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   7c7053b874069       kube-proxy-8qv4k
	b8f7d4a5b42d2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   ec489aa35ca0a       kube-scheduler-embed-certs-223942
	572eaaff73b31       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   153e960b9abe6       etcd-embed-certs-223942
	6998c3a00bca0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   e7dfebeb54ef4       kube-controller-manager-embed-certs-223942
	d211bb9e2c693       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   93356c94c9d18       kube-apiserver-embed-certs-223942
	885cb2372a071       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   5670c3d78eb5b       kube-apiserver-embed-certs-223942
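
The container status table above is minikube's condensed view of the same data the ListContainers responses higher up return in full. As a rough sketch (assuming the profile is named after the node, embed-certs-223942, and that CRI-O is on its default socket), the same listing can usually be pulled straight from the runtime:

    minikube ssh -p embed-certs-223942 -- sudo crictl ps -a
    # short IDs from the table work as prefixes; fall back to the full ID if the prefix is ambiguous
    minikube ssh -p embed-certs-223942 -- sudo crictl logs 885cb2372a071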
	
	
	==> coredns [1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-223942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-223942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=embed-certs-223942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:29:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-223942
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:34:22 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:34:22 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:34:22 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:34:22 +0000   Fri, 11 Oct 2024 22:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.238
	  Hostname:    embed-certs-223942
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 69c0ac3f57ef4f8e90a55b80a28acbfc
	  System UUID:                69c0ac3f-57ef-4f8e-90a5-5b80a28acbfc
	  Boot ID:                    e156c070-0f57-421d-b90c-d63d5affe806
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bchd4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-7c65d6cfc9-qcct7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-223942                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-223942             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-embed-certs-223942    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-8qv4k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-223942             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-5s6hn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m7s   kube-proxy       
	  Normal  Starting                 9m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m14s  kubelet          Node embed-certs-223942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s  kubelet          Node embed-certs-223942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s  kubelet          Node embed-certs-223942 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s  node-controller  Node embed-certs-223942 event: Registered Node embed-certs-223942 in Controller
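
The node description above, including the metrics-server-6867b74b74-5s6hn entry in the pod list, is `kubectl describe node` output. Assuming minikube registered a kubeconfig context named after the profile, it can be re-checked against the live cluster with:

    kubectl --context embed-certs-223942 describe node embed-certs-223942
    kubectl --context embed-certs-223942 -n kube-system get pods -o wide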
	
	
	==> dmesg <==
	[  +0.051075] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040291] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.856390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.412189] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.615501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct11 22:24] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.062937] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065859] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.176872] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.164928] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.309075] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.189975] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +1.846284] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.061722] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500288] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.384181] kauditd_printk_skb: 85 callbacks suppressed
	[Oct11 22:28] systemd-fstab-generator[2569]: Ignoring "noauto" option for root device
	[  +0.059758] kauditd_printk_skb: 9 callbacks suppressed
	[Oct11 22:29] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +0.096253] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.812748] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[  +0.773515] kauditd_printk_skb: 34 callbacks suppressed
	[  +9.654875] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056] <==
	{"level":"info","ts":"2024-10-11T22:29:01.466111Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:29:01.468749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-10-11T22:29:01.468786Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-10-11T22:29:01.469056Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e2f0763a23b2a427","initial-advertise-peer-urls":["https://192.168.72.238:2380"],"listen-peer-urls":["https://192.168.72.238:2380"],"advertise-client-urls":["https://192.168.72.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:29:01.469111Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:29:01.483786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:01.483944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:01.483981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgPreVoteResp from e2f0763a23b2a427 at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:01.484017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:01.484054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgVoteResp from e2f0763a23b2a427 at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:01.484082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became leader at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:01.484106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e2f0763a23b2a427 elected leader e2f0763a23b2a427 at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:01.488004Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e2f0763a23b2a427","local-member-attributes":"{Name:embed-certs-223942 ClientURLs:[https://192.168.72.238:2379]}","request-path":"/0/members/e2f0763a23b2a427/attributes","cluster-id":"fce591e0af426ce5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:29:01.488753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:29:01.489140Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:01.490751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:29:01.490957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:01.490990Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:01.491558Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:01.494295Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:01.495020Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:29:01.495192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.238:2379"}
	{"level":"info","ts":"2024-10-11T22:29:01.495274Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fce591e0af426ce5","local-member-id":"e2f0763a23b2a427","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:01.507564Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:01.507621Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:38:20 up 14 min,  0 users,  load average: 0.03, 0.09, 0.08
	Linux embed-certs-223942 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc] <==
	W1011 22:28:53.601529       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.632043       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.636636       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.647270       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.679589       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.787872       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.799449       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.806947       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.821617       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.850538       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.934477       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.954034       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.980095       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.992506       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.012112       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.033530       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.060198       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.109887       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.180535       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.193196       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.401835       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.432394       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.640242       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:57.368433       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:58.196914       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
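
All of these warnings come from the earlier kube-apiserver attempt (885cb2372a071, now Exited) and record that nothing was answering on 127.0.0.1:2379 during the restart window around 22:28:53-58; the attempt-2 apiserver and etcd containers in the status table were created at roughly 22:29:00 and do not log these errors. If that needed confirming by hand from inside the node, a rough check might be:

    sudo crictl ps -a | grep -E 'etcd|kube-apiserver'
    sudo crictl logs 885cb2372a071 | tail -n 20   # short ID taken from the container table above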
	
	
	==> kube-apiserver [d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8] <==
	W1011 22:34:04.931334       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:34:04.931538       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:34:04.932377       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:34:04.933478       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:35:04.932828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:35:04.932981       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:35:04.933951       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:35:04.934128       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:35:04.934341       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:35:04.935335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:37:04.935457       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:37:04.935822       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:37:04.935901       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:37:04.936010       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:37:04.936993       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:37:04.937221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249] <==
	E1011 22:33:10.886476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:33:11.354287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:33:40.894072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:33:41.361224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:34:10.901918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:11.369370       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:34:22.081061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-223942"
	E1011 22:34:40.908669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:41.379436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:34:51.687322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="211.206µs"
	I1011 22:35:04.686635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="90.592µs"
	E1011 22:35:10.915694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:11.388514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:35:40.922249       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:41.398095       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:36:10.930583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:11.407284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:36:40.937308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:41.415306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:10.944047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:11.423237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:40.950014       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:41.431197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:38:10.956802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:38:11.439346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:29:12.575187       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:29:12.591672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.238"]
	E1011 22:29:12.591937       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:29:12.663029       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:29:12.663114       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:29:12.663195       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:29:12.667410       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:29:12.667793       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:29:12.667807       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:29:12.671088       1 config.go:199] "Starting service config controller"
	I1011 22:29:12.671146       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:29:12.671181       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:29:12.671197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:29:12.672070       1 config.go:328] "Starting node config controller"
	I1011 22:29:12.672141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:29:12.772166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 22:29:12.772190       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:29:12.772208       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea] <==
	W1011 22:29:04.771428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 22:29:04.771483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.807438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.807491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.810891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:29:04.810942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.822192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:04.822326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.867028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.867237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.911147       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:29:04.911304       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:29:04.924931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 22:29:04.925124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.953828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.953878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.990061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 22:29:04.990124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.105943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:29:05.106091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.157158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 22:29:05.157522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.280174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:29:05.280789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1011 22:29:07.934588       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:37:06 embed-certs-223942 kubelet[2897]: E1011 22:37:06.775461    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686226775152672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:16 embed-certs-223942 kubelet[2897]: E1011 22:37:16.779864    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686236779355080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:16 embed-certs-223942 kubelet[2897]: E1011 22:37:16.779906    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686236779355080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:19 embed-certs-223942 kubelet[2897]: E1011 22:37:19.671685    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:37:26 embed-certs-223942 kubelet[2897]: E1011 22:37:26.782032    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686246781518538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:26 embed-certs-223942 kubelet[2897]: E1011 22:37:26.782414    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686246781518538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:33 embed-certs-223942 kubelet[2897]: E1011 22:37:33.672816    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:37:36 embed-certs-223942 kubelet[2897]: E1011 22:37:36.784251    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686256783543286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:36 embed-certs-223942 kubelet[2897]: E1011 22:37:36.784352    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686256783543286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:45 embed-certs-223942 kubelet[2897]: E1011 22:37:45.671664    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:37:46 embed-certs-223942 kubelet[2897]: E1011 22:37:46.786450    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686266786170292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:46 embed-certs-223942 kubelet[2897]: E1011 22:37:46.786756    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686266786170292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:56 embed-certs-223942 kubelet[2897]: E1011 22:37:56.789098    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686276788524255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:56 embed-certs-223942 kubelet[2897]: E1011 22:37:56.789141    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686276788524255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:59 embed-certs-223942 kubelet[2897]: E1011 22:37:59.672117    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]: E1011 22:38:06.694198    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]: E1011 22:38:06.790250    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686286789921599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:06 embed-certs-223942 kubelet[2897]: E1011 22:38:06.790286    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686286789921599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:12 embed-certs-223942 kubelet[2897]: E1011 22:38:12.671150    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:38:16 embed-certs-223942 kubelet[2897]: E1011 22:38:16.792120    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686296791797214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:16 embed-certs-223942 kubelet[2897]: E1011 22:38:16.792378    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686296791797214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761] <==
	I1011 22:29:13.061397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:29:13.071174       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:29:13.071374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:29:13.080887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:29:13.081101       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21!
	I1011 22:29:13.083659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6324dea6-af7b-49be-b2ed-a9f9889bb6a5", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21 became leader
	I1011 22:29:13.181855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-223942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5s6hn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn: exit status 1 (57.110174ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5s6hn" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.98s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1011 22:30:11.426589   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:30:24.492918   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:38:41.791843005 +0000 UTC m=+6041.806200506
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-070708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-070708 logs -n 25: (1.92222131s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
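The "will retry after …" lines above are a retry loop with a growing delay while libvirt hands the new domain a DHCP lease. A minimal, self-contained Go sketch of that pattern (illustrative only; retryUntil and the simulated errNoIP are invented names, not minikube's retry.go API):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// retryUntil calls fn until it succeeds or maxWait elapses, sleeping a little
// longer after each failed attempt, like the backoff visible in the log above.
func retryUntil(maxWait time.Duration, fn func() error) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errNoIP // simulate the DHCP lease not being ready yet
		}
		return nil
	})
	fmt.Println("machine came up after", attempts, "attempts")
}

Once the lease appears, the log below switches back to the embed-certs-223942 run, which is extracting its preload tarball in parallel.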
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
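For the extraction step just completed (sudo tar … -I lz4 -C /var -xf /preloaded.tar.lz4), here is a hedged sketch of the same invocation run through a local os/exec call rather than minikube's ssh_runner over SSH; the path and flags mirror the log, nothing else is implied about the real implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stream the lz4-compressed preload tarball into tar, preserving xattrs.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted under /var")
}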
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
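The generated file above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that the log later copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming gopkg.in/yaml.v3 is available, that splits such a multi-document config and reports each document's kind; this is illustration only, not how kubeadm or minikube consume the file:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	config := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

	// Split on the document separator and read only apiVersion/kind from each part.
	for _, doc := range strings.Split(config, "---") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			fmt.Println("unmarshal error:", err)
			continue
		}
		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	}
}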
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
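The series of `openssl x509 -noout -in … -checkend 86400` runs above verifies that each reused control-plane certificate remains valid for at least the next 24 hours (86400 seconds). A minimal Go equivalent using crypto/x509 (a sketch only; expiresWithin is an invented helper, and the path is just one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}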
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
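The healthz polling above shows the usual restart sequence: first 403 (the anonymous probe is rejected before RBAC bootstrap roles exist), then 500 while post-start hooks such as bootstrap-controller finish, and finally 200 "ok". A minimal sketch of such a poll loop in Go (illustrative only; InsecureSkipVerify is used solely because this throwaway client does not load the cluster CA, and waitForHealthz is an invented helper):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses,
// tolerating the transient 403/500 responses seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.238:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}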
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
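The pod listing above ("waiting for kube-system pods to appear") can be reproduced with client-go. A hedged sketch, assuming k8s.io/client-go is available and using a hypothetical kubeconfig path rather than minikube's profile-specific one:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; substitute the kubeconfig for the profile under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		fmt.Println("build clientset:", err)
		return
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list pods:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}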
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
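Each certificate copied into /usr/share/ca-certificates is exposed to OpenSSL's trust store by symlinking it under /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout` (b5213941.0 above is the hash of minikubeCA.pem). A hedged Go sketch of that install step, shelling out to openssl the same way the logged commands do (function name is made up), might be:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert symlinks certPath into linkDir under its OpenSSL subject hash,
	// mirroring the "openssl x509 -hash" + "ln -fs" pair seen in the log (sketch only).
	func installCACert(certPath, linkDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("%s/%s.0", linkDir, strings.TrimSpace(string(out)))
		_ = os.Remove(link) // emulate ln -f: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}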
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
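`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the remaining control-plane certificates are validated before being reused. The equivalent check in Go (an illustrative sketch, not minikube's code) is just a comparison against NotAfter:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires before
	// now+window (the moral equivalent of `openssl x509 -checkend <seconds>`).
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}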
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
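The restart path polls /healthz until the API server answers 200; the 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses above are treated as "not ready yet" and retried on a short interval (roughly every 500ms in the timestamps above). A simplified poller in Go, purely as a sketch (minikube's real client authenticates with the cluster certificates rather than skipping TLS verification), could be:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the deadline passes.
	// TLS verification is skipped here only to keep the sketch self-contained.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // 403/500/connection errors: retry
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.162:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}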
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
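The provision step above issues a per-machine Docker server certificate signed by the shared minikube CA, with the SAN list shown in the log (loopback, the VM's IP, and the machine hostnames). A compact Go sketch of issuing such a certificate with crypto/x509 (illustrative only: it assumes a PEM CA certificate plus a PKCS#1 RSA CA key, uses a hypothetical subject, and elides most error handling) might be:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServerCert signs a server certificate for the given IPs/DNS names with an
	// existing CA (sketch: assumes PEM cert and PKCS#1 RSA key, skips error checks).
	func issueServerCert(caCertPEM, caKeyPEM []byte, ips []net.IP, dns []string) (certPEM, keyPEM []byte) {
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"minikube"}}, // hypothetical subject
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM
	}

	func main() {
		caCert, _ := os.ReadFile("ca.pem")
		caKey, _ := os.ReadFile("ca-key.pem")
		cert, key := issueServerCert(caCert, caKey,
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.223")},
			[]string{"localhost", "minikube", "old-k8s-version-323416"})
		_ = os.WriteFile("server.pem", cert, 0644)
		_ = os.WriteFile("server-key.pem", key, 0600)
	}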
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
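For reference, the "guest clock delta is within tolerance" line above comes from comparing the VM's clock against the host's. A minimal Go sketch of that kind of check follows; the helper name and the 2-second tolerance are illustrative assumptions, not minikube's actual fix.go code.

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports the absolute guest/host clock difference
	// and whether it is at or below the given tolerance (illustrative sketch).
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(74 * time.Millisecond) // roughly the ~74ms delta seen in the log
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}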
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
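The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager. A rough Go equivalent of that line-rewrite step, operating on an in-memory string; the function name and sample input are illustrative assumptions, not minikube's crio.go implementation, which runs sed over SSH on the remote file instead.

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCrioOption replaces any existing "<key> = ..." line with the desired
	// quoted value, mirroring the sed -i substitutions shown in the log
	// (illustrative only).
	func setCrioOption(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		conf := "pause_image = \"k8s.gcr.io/pause:3.1\"\ncgroup_manager = \"systemd\"\n" // sample input (assumed)
		conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}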
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
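Each certificate above is installed by hashing it with "openssl x509 -hash -noout -in ..." and symlinking /etc/ssl/certs/<hash>.0 to it, which is how OpenSSL looks up CA certificates by subject hash. A small Go sketch of the same two steps, shelling out to openssl; the helper name and paths are illustrative, and it assumes openssl is on PATH and the caller can write to the certs directory.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash mirrors the log's "openssl x509 -hash" + "ln -fs" steps:
	// compute the certificate's subject hash and symlink <hash>.0 in the certs
	// directory back to it (illustrative sketch, assumed helper name).
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, as ln -fs would
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}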
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
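The grep/rm sequence above is the stale-config cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following kubeadm phases can regenerate it. A minimal local sketch of the same decision; the real code shells these checks out over SSH with rm -f semantics.

// stale kubeconfig cleanup sketch
package main

import (
	"bytes"
	"os"
)

func cleanStaleKubeConfigs() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: delete it (ignore errors, like rm -f).
		if err != nil || !bytes.Contains(data, endpoint) {
			_ = os.Remove(f)
		}
	}
}

func main() { cleanStaleKubeConfigs() }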
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
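The five commands above re-run individual kubeadm init phases against the same /var/tmp/minikube/kubeadm.yaml. A minimal sketch of driving that phase sequence, assuming local execution rather than the ssh_runner used in the log; paths match the log, everything else is illustrative.

// kubeadm init phase sequence sketch
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runInitPhases(kubeadmBin, config string) error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Equivalent to: kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		if out, err := exec.Command(kubeadmBin, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	_ = runInitPhases("/var/lib/minikube/binaries/v1.20.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
}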
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
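The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a ~500ms poll waiting for the apiserver process to appear after kubelet-start. A minimal sketch of such a wait loop; the 2-minute deadline is an assumption, not the timeout the log actually uses.

// apiserver process wait-loop sketch
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 and prints the PID when a matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			var pid int
			fmt.Sscanf(string(out), "%d", &pid)
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	fmt.Println(pid, err)
}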
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
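provision.go generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the hostname, signed by the minikube CA. A standard-library sketch of issuing a certificate with that SAN layout; it is self-signed for brevity, uses ECDSA rather than whatever the provisioner actually uses, and only illustrates the SAN and expiry shape seen in the log.

// server certificate SAN sketch
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newServerCert(host string, ip net.IP) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins." + host}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", host},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key and writes server.pem/server-key.pem.
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}

func main() {
	_, _ = newServerCert("no-preload-390487", net.ParseIP("192.168.61.55"))
}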
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
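fix.go compares the guest clock ("date +%s.%N" over SSH) with the host's wall clock and accepts the ~61ms delta as within tolerance. A small sketch of that check with the timestamps taken from the log; the 3s tolerance is an assumption, not the value minikube uses.

// guest clock drift check sketch
package main

import (
	"fmt"
	"time"
)

func clockDriftOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1728685505, 320713090) // parsed from "1728685505.320713090"
	host := time.Date(2024, 10, 11, 22, 25, 5, 259318089, time.UTC)
	delta, ok := clockDriftOK(guest, host, 3*time.Second)
	fmt.Println(delta, ok) // ~61ms, within tolerance
}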
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
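cni.go disables conflicting bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix, which is what the "find ... -exec mv" above does (here it caught 87-podman-bridge.conflist). A minimal sketch of the same rename pass, run locally rather than over SSH.

// bridge CNI config disable sketch
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Only bridge/podman configs are disabled; everything else is left alone.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	out, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println(out, err)
}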
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
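The block above writes /etc/crictl.yaml and patches /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, sysctls) before reloading systemd and restarting crio. A simplified sketch of the first two edits done in-process instead of via sed; paths and values are taken from the log, the remaining edits and the systemctl calls are omitted.

// CRI-O config edit sketch
package main

import (
	"os"
	"regexp"
)

func configureCrio() error {
	// runtime-endpoint for crictl (what the "tee /etc/crictl.yaml" step writes).
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644); err != nil {
		return err
	}
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(conf, data, 0644)
}

func main() { _ = configureCrio() }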
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
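The bash pipeline above keeps /etc/hosts free of stale host.minikube.internal lines and appends the current gateway mapping. A minimal sketch of the same idempotent update; the gateway IP is taken from the log, everything else is illustrative.

// host.minikube.internal /etc/hosts update sketch
package main

import (
	"os"
	"strings"
)

func pinHostEntry(gatewayIP string) error {
	const marker = "\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal mapping so the entry stays unique.
		if !strings.HasSuffix(line, marker) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, gatewayIP+marker)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() { _ = pinHostEntry("192.168.61.1") }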
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
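With no preload tarball for v1.31.1, cache_images.go checks each required image against the container runtime and marks the missing ones for transfer; the "podman image inspect" runs below are that check, and the daemon lookup failures above just mean the images are not in a local Docker daemon either. A minimal sketch of the decision; the real code also compares image IDs/digests and runs everything over SSH.

// "needs transfer" decision sketch
package main

import (
	"fmt"
	"os/exec"
)

var required = []string{
	"registry.k8s.io/kube-apiserver:v1.31.1",
	"registry.k8s.io/kube-controller-manager:v1.31.1",
	"registry.k8s.io/kube-scheduler:v1.31.1",
	"registry.k8s.io/kube-proxy:v1.31.1",
	"registry.k8s.io/pause:3.10",
	"registry.k8s.io/etcd:3.5.15-0",
	"registry.k8s.io/coredns/coredns:v1.11.3",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

func missingImages() []string {
	var missing []string
	for _, img := range required {
		// A non-zero exit status means the runtime does not know the image yet.
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Run(); err != nil {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() { fmt.Println(missingImages()) }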
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
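The image-loading phase above follows one pattern per tarball: stat the file under /var/lib/minikube/images, skip the copy when it already exists on the node, then run `sudo podman load -i <tarball>` so the CRI-O runtime can see the image. A minimal, hypothetical Go sketch of that load step (not minikube's actual code; paths are illustrative and it assumes podman is installed on the target):

	// Hypothetical sketch: load cached image tarballs into the container
	// runtime with `podman load`, skipping tarballs that are missing.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func loadCachedImage(tarball string) error {
		// Mirror the "stat -c ..." check in the log: bail out if the tarball is absent.
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cached tarball not found: %w", err)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		images := []string{
			"/var/lib/minikube/images/coredns_v1.11.3",
			"/var/lib/minikube/images/etcd_3.5.15-0",
		}
		for _, img := range images {
			if err := loadCachedImage(img); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}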
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
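The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate stays valid for at least another 24 hours (86400 seconds); openssl exits non-zero when the certificate would expire inside that window. A minimal, hypothetical Go sketch of the same check (not part of minikube; the path and window below are illustrative):

	// Hypothetical sketch: report whether a PEM certificate expires within a window,
	// equivalent in spirit to `openssl x509 -noout -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// -checkend N fails when the cert is no longer valid N seconds from now.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}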
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
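The healthz wait above first sees 403 (the unauthenticated probe is rejected as system:anonymous), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A minimal, hypothetical sketch of such a polling loop (not minikube's implementation; the endpoint, interval, and timeout are illustrative, and TLS verification is skipped because the probe carries no client certificate):

	// Hypothetical sketch: poll an apiserver /healthz endpoint until it
	// returns 200 OK or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.55:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}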
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
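The repeated "connection refused" on localhost:8443 means the API server is simply not listening yet; a quick manual check from inside the VM (not part of the captured run; the ss invocation is an assumption) would be:

    # Nothing should be listening on the apiserver port while the container is absent.
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"

    # Any kubectl call against the same kubeconfig fails the same way, e.g.:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes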
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
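The interleaved pod_ready lines come from the other StartStop profiles polling their metrics-server pods; the Ready condition they wait on can be checked directly with kubectl, for example (pod name taken from the log above; the rest is illustrative and assumes the matching profile's kubeconfig context is selected):

    # Print the Ready condition status the test is waiting on.
    kubectl -n kube-system get pod metrics-server-6867b74b74-l7xbw \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'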
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
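The pod_ready.go lines just above show the bounded wait that precedes the reset: the pod's Ready condition is polled until the 4m0s timeout expires, after which the control-plane restart is abandoned and `kubeadm reset` is run. A hedged sketch of such a bounded Ready wait using client-go follows; the kubeconfig path wiring, the 2-second poll interval, and the error handling are assumptions for illustration, not minikube's code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition until it is True or the
// timeout elapses, roughly the behaviour the pod_ready.go lines record.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling while the apiserver is unreachable
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Illustrative kubeconfig path; the log uses /var/lib/minikube/kubeconfig on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The log waits 4m0s for this metrics-server pod before giving up and resetting.
	if err := waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-9xr4k", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
```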
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
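The preceding lines show minikube's stale-config cleanup before re-running kubeadm: each kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint, and any file that does not reference it (or does not exist) is removed so that a fresh one can be written by kubeadm init. The Go sketch below reproduces that pattern; it is an illustration, not minikube's source: the helper name is invented, and the commands run locally here whereas minikube executes them on the node over SSH.

// Hypothetical sketch of the stale-kubeconfig cleanup seen above (not
// minikube's actual code): grep each kubeconfig for the control-plane
// endpoint and delete any file that does not reference it, so that
// "kubeadm init" regenerates it.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing it: %v\n", endpoint, f, err)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}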
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
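Here the bridge CNI is enabled by copying a conflist (496 bytes, contents not included in the log) to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a minimal, hypothetical bridge-plus-portmap conflist of the kind such a setup typically uses; every field value is an assumption for illustration and is not the file minikube actually generated.

// Illustrative only: writes a minimal, assumed bridge CNI conflist to the
// path shown in the log above. The JSON content is a guess at a typical
// bridge+portmap configuration, not minikube's real file.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}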
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
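The repeated runs of "kubectl get sa default" above are a poll: minikube waits until the default service account exists before elevating kube-system privileges via the minikube-rbac clusterrolebinding. A minimal sketch of that retry loop follows; the 500ms interval and the helper name are assumptions, not minikube's actual implementation.

// Assumed sketch of the polling behind the repeated "kubectl get sa default"
// runs: retry until the default service account is visible or a timeout hits.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}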
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
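The addon-enable path visible above copies each manifest to /etc/kubernetes/addons on the node and then applies them in a single kubectl invocation that uses the node-local kubeconfig. A hedged sketch of that apply step follows; the helper and its signature are assumptions, while the binary, kubeconfig, and manifest paths mirror the log.

// Assumed sketch of the addon apply step: build one "kubectl apply" call
// over all copied manifests, run as root with the node's kubeconfig.
package main

import (
	"os"
	"os/exec"
)

func applyAddonManifests(kubectl string, manifests []string) error {
	// sudo accepts VAR=value arguments before the command, as in the logged invocation.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddonManifests("/var/lib/minikube/binaries/v1.31.1/kubectl", []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
}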
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
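The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, treated as healthy on a 200 response with body "ok". A simplified sketch follows, assuming InsecureSkipVerify in place of the cluster CA that the real check would trust.

// Simplified sketch of the apiserver healthz probe logged above. Assumption:
// certificate verification is skipped here; a faithful probe would use the
// cluster's CA bundle instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.72.238:8443/healthz")
	fmt.Println(ok, err)
}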
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
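The 77526 run above finishes by polling the apiserver /healthz endpoint, listing kube-system pods, and confirming the kubelet unit is active before printing "Done!". The healthz wait is a plain poll-until-200 loop; a minimal Go sketch of that pattern follows, with the endpoint taken from the log and TLS verification skipped only for brevity (the real check trusts the cluster CA), so treat it as an illustration rather than the minikube implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or the
// deadline passes, mirroring the api_server.go wait visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real check uses the cluster CA; InsecureSkipVerify is only for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint as logged for embed-certs-223942.
	if err := waitForHealthz("https://192.168.72.238:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}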
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
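The cleanup above greps each kubeconfig under /etc/kubernetes for the expected "https://control-plane.minikube.internal:8444" server entry and removes any file that does not contain it (here they simply do not exist) before re-running kubeadm init. A rough Go sketch of that check-and-remove loop, with the ssh_runner plumbing omitted; file names and the endpoint are taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanupStaleKubeconfigs removes kubeconfig files that do not reference the expected
// control-plane endpoint, roughly mirroring the grep/rm sequence in the log above.
func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
			os.Remove(f) // ignore errors: the file may simply not exist yet
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}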
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
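The repeated "kubectl get sa default" runs above are a retry loop: after init, minikube polls roughly every 500ms until the controller-manager has created the "default" service account, and reports the total as the elevateKubeSystemPrivileges duration. A simplified sketch of that poll, assuming kubectl is invoked directly and using the node-local kubeconfig path from the log; it is an illustration of the pattern, not the minikube code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it succeeds,
// matching the ~500ms retry cadence visible in the log above.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("default service account ready after %s\n", time.Since(start))
}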
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
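The addon step above copies the storageclass, storage-provisioner, and metrics-server manifests to /etc/kubernetes/addons on the node and applies them with the bundled kubectl before verifying the metrics-server addon. A condensed, hedged sketch of that apply step; the paths are the ones logged, but the direct exec here stands in for the ssh_runner used by minikube.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs the bundled kubectl against the node-local kubeconfig,
// the same way the log applies the metrics-server manifests above.
func applyAddonManifests(kubectlPath, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlPath, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.31.1/kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}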
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
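At this point the kubeconfig in the jenkins workspace has its current context switched to the new default-k8s-diff-port-070708 profile, so the cluster can be inspected programmatically as well as with kubectl. A small client-go example under that assumption; the module imports and the kubeconfig path (taken from the log) are the only moving parts.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path as written by the run above; adjust for a local reproduction.
	kubeconfig := "/home/jenkins/minikube-integration/19749-11611/kubeconfig"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods for the currently selected context.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}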
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
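The 77373 run above exhausts its full 4m0s WaitExtra budget because the metrics-server pod never reports Ready, so the readiness wait ends with "context deadline exceeded" and minikube falls back to a full kubeadm reset and re-init. The wait itself is the standard poll-until-ready-or-deadline pattern; a self-contained Go sketch of that failure mode follows, with isPodReady as a hypothetical stand-in for the real API check in pod_ready.go.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// isPodReady is a stand-in for the real check, which reads the pod's Ready condition
// from the API server. It always reports false here so the timeout path is exercised.
func isPodReady(ctx context.Context, namespace, name string) (bool, error) {
	return false, nil
}

// waitPodCondition polls until the pod is Ready or the deadline passes, matching the
// "context deadline exceeded" outcome seen for metrics-server-6867b74b74-tk8fq above.
func waitPodCondition(namespace, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		ready, err := isPodReady(ctx, namespace, name)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	err := waitPodCondition("kube-system", "metrics-server-6867b74b74-tk8fq", 10*time.Second)
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("pod never became Ready:", err)
	}
}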
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
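The 78126 init above is stalled at the kubelet-check stage: kubeadm keeps probing http://localhost:10248/healthz and getting connection refused, which normally means the kubelet never started or exited immediately. The probe is equivalent to the sketch below; the port and the 40s initial timeout come from the log, while the retry interval is an assumption.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubeletHealthz mirrors kubeadm's [kubelet-check]: GET the kubelet healthz
// endpoint until it answers with 200, otherwise report the same refused-connection symptom.
func probeKubeletHealthz(timeout time.Duration) error {
	const url = "http://127.0.0.1:10248/healthz"
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		} else {
			fmt.Printf("kubelet not answering yet: %v\n", err)
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet at %s not healthy after %s", url, timeout)
}

func main() {
	if err := probeKubeletHealthz(40 * time.Second); err != nil {
		// From here one would typically inspect the kubelet unit logs on the node.
		fmt.Println(err)
	}
}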
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
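For reference, the stale-config cleanup logged above amounts to a short shell sequence: probe each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not contain it, so the following kubeadm init starts from a clean slate. A simplified sketch (same paths and endpoint as in the log; the loop itself is illustrative, not minikube's actual code):

    #!/bin/bash
    # Illustrative sketch of the per-file checks in the log above.
    endpoint="https://control-plane.minikube.internal:8443"
    for name in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        conf="/etc/kubernetes/$name"
        # After 'kubeadm reset' the files are gone, so grep exits non-zero
        # (status 2 for a missing file); in either case the file is removed.
        if ! sudo grep -q "$endpoint" "$conf"; then
            sudo rm -f "$conf"
        fi
    done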
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
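The [kubelet-check] lines in this init run (and the repeated connection-refused failures from the other profile interleaved above) are kubeadm polling the kubelet's local healthz endpoint. The same probe can be reproduced by hand from the host; a quick sketch, using the profile name from the log and assuming curl is available inside the guest:

    # Same check kubeadm performs at [kubelet-check]; "ok" means the kubelet is healthy,
    # "connection refused" matches the failing bootstrap logged elsewhere in this report.
    minikube ssh -p no-preload-390487 -- curl -sSL http://localhost:10248/healthz
    # If the connection is refused, inspect the service directly:
    minikube ssh -p no-preload-390487 -- sudo systemctl status kubelet --no-pager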
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
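The 496-byte conflist copied here is the bridge CNI configuration minikube generates for the kvm2 + crio combination; its exact contents are not printed in the log. A typical bridge conflist of that shape looks like the following (illustrative only; the subnet and plugin flags are assumptions, not taken from this report):

    # Hypothetical example of a bridge CNI config; not the literal file from the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF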
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
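Once the three addons are enabled, the remainder of this run verifies them through the API. The same verification can be done by hand with standard kubectl commands against the context minikube writes for this profile (shown only as a convenience; the context name matches the profile in the log):

    kubectl --context no-preload-390487 -n kube-system get deploy metrics-server
    kubectl --context no-preload-390487 -n kube-system get pod storage-provisioner
    kubectl --context no-preload-390487 get storageclass
    # metrics-server needs a minute of scrape data before `kubectl top` returns anything:
    kubectl --context no-preload-390487 top nodes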
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
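The healthz probe above is an ordinary HTTPS GET against the apiserver. Reproducing it from the host requires the cluster CA (and, optionally, the profile's client certificate); a sketch using minikube's default certificate layout, which may differ on other setups:

    # IP and port are taken from the log; certificate paths are minikube defaults.
    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/no-preload-390487/client.crt \
         --key    ~/.minikube/profiles/no-preload-390487/client.key \
         https://192.168.61.55:8443/healthz
    # Expected output: "ok", matching the 200 response logged above.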
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
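With the profile reported as Done, the usual quick confirmation from the host is minikube's own status command plus a node listing (standard commands; the profile and context names are the ones from the log):

    minikube status -p no-preload-390487
    kubectl --context no-preload-390487 get nodes -o wide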
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
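The failed v1.20.0 bootstrap above ends with kubeadm's standard troubleshooting advice. Collected into one pass on the affected node, those checks look roughly like this (a sketch; the crictl endpoint is the same /var/run/crio/crio.sock named in the log):

    # Run inside the failing node, e.g. via `minikube ssh -p <profile>`.
    sudo systemctl status kubelet --no-pager        # is the kubelet running at all?
    sudo journalctl -xeu kubelet | tail -n 50       # and if not, why it keeps exiting
    # List any control-plane containers cri-o managed to start:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then inspect the logs of a failing container:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID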
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
	
	
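Both the kubeadm output and minikube's suggestion above point at the kubelet never answering on port 10248; with cri-o that is most often a kubelet/runtime cgroup-driver mismatch, which is why the log suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. A hedged sketch of how one could verify the two drivers on the node before retrying (the file paths are the usual defaults and are assumptions, not taken from this log):

    # Kubelet's configured driver (written out by kubeadm/minikube):
    sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
    # CRI-O's driver (main config plus any drop-ins):
    sudo grep -ri cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    # Kubelet service state and recent logs, as the error text above recommends:
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet | tail -n 50

The CRI-O journal excerpt that follows is from a healthy default-k8s-diff-port node and is included for comparison, not from the failing old-k8s-version cluster above.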
	==> CRI-O <==
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.139271108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686323139251824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdacffe5-1a6c-4b24-acd6-2c7c1e5e708a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.139794263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a2fed42-20c0-4713-89da-02484cbe67aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.139848726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a2fed42-20c0-4713-89da-02484cbe67aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.140035005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a2fed42-20c0-4713-89da-02484cbe67aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.176090360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1889ac9-5564-41ec-91e6-7b5edcdc13bd name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.176160116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1889ac9-5564-41ec-91e6-7b5edcdc13bd name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.177314351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=486360ad-723f-43ea-9b3d-76ff134dcb32 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.177753472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686323177730154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=486360ad-723f-43ea-9b3d-76ff134dcb32 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.178182106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e53b5ad-f2b8-497a-ba08-004ef378fa05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.178233738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e53b5ad-f2b8-497a-ba08-004ef378fa05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.178457382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e53b5ad-f2b8-497a-ba08-004ef378fa05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.219713638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31d28043-818a-4aa3-abba-bd24dcd571ce name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.219782482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31d28043-818a-4aa3-abba-bd24dcd571ce name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.220964364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59a4d4d2-2c79-478b-a365-d6d77beef088 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.221381216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686323221362030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59a4d4d2-2c79-478b-a365-d6d77beef088 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.222137115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b7cebbb-2bde-410f-8217-e04e90dcf35a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.222187379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b7cebbb-2bde-410f-8217-e04e90dcf35a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.222381685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b7cebbb-2bde-410f-8217-e04e90dcf35a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.253900431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e18b5756-4edc-48e7-906d-04a097a8b8e0 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.253968064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e18b5756-4edc-48e7-906d-04a097a8b8e0 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.254984846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d88a0a07-c52b-4349-9ec7-b7d2cfee3859 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.255368296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686323255349768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d88a0a07-c52b-4349-9ec7-b7d2cfee3859 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.256018976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23576482-15f2-4040-a537-9f9f211400cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.256172617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23576482-15f2-4040-a537-9f9f211400cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:38:43 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:38:43.256363508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23576482-15f2-4040-a537-9f9f211400cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2229a9f091011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6e73d99a53a98       storage-provisioner
	93d5a4bb1b104       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9a451a513699b       coredns-7c65d6cfc9-gtw9g
	da6e7a92c7137       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   d0f860f9ed006       coredns-7c65d6cfc9-zvctp
	571b0b8905a01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   603dad3b36fb5       kube-proxy-f5jxp
	f483651efa722       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   67d24d1da7949       kube-scheduler-default-k8s-diff-port-070708
	bf663831c4154       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   85663564de4b6       etcd-default-k8s-diff-port-070708
	01d96e49d1dce       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   99d604ac06aa9       kube-apiserver-default-k8s-diff-port-070708
	be779ed72e098       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   ff6670dfc012a       kube-controller-manager-default-k8s-diff-port-070708
	ec5a10bd3c273       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   b90c8b0b224e5       kube-apiserver-default-k8s-diff-port-070708
	
	
	==> coredns [93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-070708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-070708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=default-k8s-diff-port-070708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-070708
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:38:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:34:44 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:34:44 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:34:44 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:34:44 +0000   Fri, 11 Oct 2024 22:29:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    default-k8s-diff-port-070708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2fe1e0181f7643ee9e66948960752b8c
	  System UUID:                2fe1e018-1f76-43ee-9e66-948960752b8c
	  Boot ID:                    c2f120d1-1329-4de0-90a6-c86e11e687ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gtw9g                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-zvctp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-default-k8s-diff-port-070708                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-070708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-070708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-f5jxp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-070708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-88h5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node default-k8s-diff-port-070708 event: Registered Node default-k8s-diff-port-070708 in Controller
	
	
	==> dmesg <==
	[  +0.050534] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041244] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995654] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471180] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568262] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.765860] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.056443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060883] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.225352] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.163783] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.317884] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.121044] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +1.987558] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.121770] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.646545] kauditd_printk_skb: 65 callbacks suppressed
	[Oct11 22:29] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.454119] systemd-fstab-generator[2571]: Ignoring "noauto" option for root device
	[  +4.429206] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.639402] systemd-fstab-generator[2896]: Ignoring "noauto" option for root device
	[  +5.891278] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[  +0.121159] kauditd_printk_skb: 14 callbacks suppressed
	[Oct11 22:30] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9] <==
	{"level":"info","ts":"2024-10-11T22:29:22.580862Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:29:22.581139Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"95e2e907d4f1ad16","initial-advertise-peer-urls":["https://192.168.39.162:2380"],"listen-peer-urls":["https://192.168.39.162:2380"],"advertise-client-urls":["https://192.168.39.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:29:22.581177Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:29:22.581265Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-10-11T22:29:22.581290Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-10-11T22:29:23.286562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:23.286675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:23.286727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgPreVoteResp from 95e2e907d4f1ad16 at term 1"}
	{"level":"info","ts":"2024-10-11T22:29:23.286761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:23.286785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgVoteResp from 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:23.286820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became leader at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:23.286846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95e2e907d4f1ad16 elected leader 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-10-11T22:29:23.291721Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"95e2e907d4f1ad16","local-member-attributes":"{Name:default-k8s-diff-port-070708 ClientURLs:[https://192.168.39.162:2379]}","request-path":"/0/members/95e2e907d4f1ad16/attributes","cluster-id":"da8895e0fc3a6493","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:29:23.291881Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:29:23.292255Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.292402Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:29:23.292660Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:23.294529Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:23.295230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:23.298029Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-10-11T22:29:23.300591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.300684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.300719Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.302836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:23.305718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:38:43 up 14 min,  0 users,  load average: 0.28, 0.28, 0.20
	Linux default-k8s-diff-port-070708 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144] <==
	W1011 22:34:26.124473       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:34:26.124674       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:34:26.125463       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:34:26.126653       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:35:26.126665       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:35:26.126912       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:35:26.127002       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:35:26.127069       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:35:26.128997       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:35:26.129035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:37:26.129205       1 handler_proxy.go:99] no RequestInfo found in the context
	W1011 22:37:26.129230       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:37:26.129645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1011 22:37:26.129719       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:37:26.130810       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:37:26.130871       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd] <==
	W1011 22:29:18.414388       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.440143       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.482966       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.513771       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.535574       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.574131       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.661843       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.706893       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.722583       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.735169       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.751741       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.818667       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.881094       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.881178       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.926294       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.974665       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.005461       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.073604       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.206269       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.214292       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.218692       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.259731       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.293055       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.459042       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.529738       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d] <==
	E1011 22:33:32.020828       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:33:32.579423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:34:02.027260       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:02.586368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:34:32.033729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:32.593879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:34:44.833665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-070708"
	E1011 22:35:02.039601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:02.601673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:35:27.453824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="392.848µs"
	E1011 22:35:32.046589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:32.614440       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:35:40.448731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="73.478µs"
	E1011 22:36:02.054654       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:02.622173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:36:32.061028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:32.631723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:02.067631       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:02.640351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:32.075429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:32.654835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:38:02.082289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:38:02.662864       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:38:32.089749       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:38:32.670260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:29:34.789830       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:29:34.821623       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1011 22:29:34.822349       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:29:34.879079       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:29:34.879108       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:29:34.879132       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:29:34.883660       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:29:34.884447       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:29:34.884714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:29:34.889209       1 config.go:199] "Starting service config controller"
	I1011 22:29:34.889278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:29:34.889386       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:29:34.889531       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:29:34.890475       1 config.go:328] "Starting node config controller"
	I1011 22:29:34.890573       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:29:34.989703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 22:29:34.989847       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:29:34.991217       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1] <==
	W1011 22:29:25.171547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:25.171583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:25.171830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:25.171870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:25.172577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:25.172686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.014574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:26.014634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.046338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:29:26.046392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.156420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.156607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.172806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:29:26.172875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.177038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.250894       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:29:26.251056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:29:26.313292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:29:26.313421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.321899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.321948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.347247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 22:29:26.347374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1011 22:29:29.449162       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:37:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:37.611571    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686257611212140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:37.611614    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686257611212140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:39 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:39.434008    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:37:47 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:47.613332    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686267612979625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:47 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:47.613745    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686267612979625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:51 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:51.431217    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:37:57 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:57.614872    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686277614444508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:37:57 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:37:57.614896    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686277614444508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:05 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:05.431409    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:38:07 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:07.617052    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686287616448991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:07 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:07.617384    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686287616448991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:16 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:16.431747    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:38:17 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:17.619700    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686297619249989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:17 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:17.619753    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686297619249989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:27.451775    2903 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:27.621786    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686307621427560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:27.621814    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686307621427560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:28 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:28.431640    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:38:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:37.624566    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686317624009583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:37.624748    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686317624009583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:42 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:38:42.433564    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	
	
	==> storage-provisioner [2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef] <==
	I1011 22:29:35.131578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:29:35.141149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:29:35.141222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:29:35.154171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:29:35.156404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2!
	I1011 22:29:35.160701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b0b144f-f2cf-474f-8ace-c1a4f70bedd9", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2 became leader
	I1011 22:29:35.257277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-88h5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g: exit status 1 (64.233376ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-88h5g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1011 22:31:55.953445   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:31:57.615556   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:32:06.383283   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-390487 -n no-preload-390487
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:39:30.569101482 +0000 UTC m=+6090.583458992
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-390487 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-390487 logs -n 25: (1.920805057s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
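As an aside for readers reproducing this step: the kubeadm.yaml written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Below is a minimal Go sketch, not part of the test harness, that checks such a file parses cleanly document by document; it assumes gopkg.in/yaml.v3 is available and reuses the path shown in the log.

// yamlcheck: parse every document in a generated kubeadm.yaml and print its kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log line above; adjust for a local reproduction.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents parsed
			}
			panic(fmt.Sprintf("document %d is not valid YAML: %v", i, err))
		}
		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
	}
}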
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
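The repeated "openssl x509 -checkend 86400" runs above ask whether each certificate expires within the next 24 hours. A minimal Go sketch of the same check with crypto/x509 follows; only the file path is taken from the log, the rest is illustrative.

// checkend: report whether a PEM certificate expires within a given window,
// equivalent in spirit to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" is past the certificate's NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}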
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
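The retry.go:31 lines above show the wait-for-machine loop retrying with a growing delay while the domain has no IP yet. A minimal, generic sketch of such a retry-with-backoff loop is below; the condition function is a stand-in for illustration, not minikube's actual IP lookup.

// retrydemo: poll a condition with a growing, jittered delay until it
// succeeds or the overall deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, cond func() error) error {
	delay := time.Second
	start := time.Now()
	for {
		err := cond()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %s: %w", deadline, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}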
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
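The healthz probing above tolerates the 403 and 500 responses the apiserver serves while it is still bootstrapping and stops once /healthz returns 200. A minimal Go sketch of that polling loop follows; TLS verification is skipped here purely to keep the example short, whereas the real client authenticates against the cluster CA.

// healthzwait: poll an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (anonymous) and 500 (post-start hooks pending) are expected early on.
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.238:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}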
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
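For context, the 1-k8s.conflist written above configures the bridge CNI for the 10.244.0.0/16 pod CIDR chosen earlier. The sketch below writes a generic bridge + host-local conflist; the JSON is illustrative only and not necessarily byte-for-byte what minikube templates.

// cniwrite: drop a generic bridge CNI conflist into /etc/cni/net.d.
package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}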
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
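The pod_ready waits above poll each system-critical pod until its Ready condition is True, skipping pods whose node itself still reports NotReady, as here. A minimal client-go sketch of the same wait follows; the kubeconfig path is hypothetical and only the namespace and pod name are taken from the log.

// podready: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location for a local reproduction.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-embed-certs-223942", metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}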
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
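The provisioning steps above run shell commands on the VM over SSH as the docker user with the machine's private key. A minimal sketch with golang.org/x/crypto/ssh follows; the host, user, and key path mirror the log, and host key checking is disabled here only to keep the example short.

// sshrun: open an SSH session with a private key and run one command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // example only; verify host keys in real use
	}
	client, err := ssh.Dial("tcp", "192.168.39.162:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	fmt.Printf("output: %s err: %v\n", out, err)
}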
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
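The grep/rm sequence above is minikube's stale-kubeconfig cleanup: for each file under /etc/kubernetes it checks whether the file still points at https://control-plane.minikube.internal:8444 and removes it if not (here every grep exits with status 2 simply because the files are absent after the stop). A hedged local sketch of the same idea, assuming the four standard kubeadm file names:

    // Hypothetical sketch of the stale-kubeconfig cleanup seen above; the endpoint
    // string is taken from the log, the file names are the standard kubeadm ones.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8444"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove it so kubeadm can regenerate it.
                _ = os.Remove(path)
                fmt.Println("removed stale", path)
            }
        }
    }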
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
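The five commands above are the restart path's kubeadm phase sequence: regenerate certs, kubeconfigs, the kubelet bootstrap, the static control-plane manifests and the local etcd manifest, all against the freshly copied /var/tmp/minikube/kubeadm.yaml. A compact, illustrative driver for that sequence (assumes the binary PATH prefix and config path shown in the log):

    // Hypothetical driver for the phase sequence logged above; error handling is
    // reduced to a bail-out on the first failed phase.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.31.1:$PATH\" kubeadm init phase " +
                p + " --config /var/tmp/minikube/kubeadm.yaml"
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
    }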
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
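The 403 → 500 → 200 progression above is the normal restart pattern: the anonymous probe is rejected until the rbac/bootstrap-roles post-start hook has created the roles that allow unauthenticated reads of /healthz, then /healthz reports the two still-pending hooks as failed, and finally returns a plain "ok". A minimal stand-alone poller reproducing that check (hypothetical; it skips TLS verification because the probe is anonymous and the apiserver certificate is not trusted by the host):

    // Hypothetical healthz poller for the endpoint shown in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://192.168.39.162:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }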
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
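Selecting the bridge CNI and copying a 496-byte /etc/cni/net.d/1-k8s.conflist is how minikube wires pod networking for the kvm2+crio combination. The file contents are not shown in the log; the sketch below writes a representative bridge conflist, where the subnet and plugin chain are assumptions rather than the file minikube actually ships:

    // Hypothetical example of a bridge CNI conflist like the one copied above. The JSON
    // body is illustrative; minikube's real 1-k8s.conflist may differ in fields and subnet.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }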
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
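The pod_ready.go lines above poll each system-critical pod until its Ready condition turns true or the 4m0s budget expires. A small client-go sketch of that check (hypothetical; the kubeconfig path is an assumption, the pod name and namespace come from the log):

    // Hypothetical client-go version of the "is this pod Ready" poll in pod_ready.go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is a placeholder assumption, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-7c65d6cfc9-bpv5v", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }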
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
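The WaitForSSH step above just runs "exit 0" through an external ssh client until it succeeds. A hedged equivalent using golang.org/x/crypto/ssh instead of the ssh binary, with the key path, user and address taken from the log:

    // Hypothetical WaitForSSH-style probe; not minikube's implementation.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        for attempt := 0; attempt < 60; attempt++ {
            if client, err := ssh.Dial("tcp", "192.168.50.223:22", cfg); err == nil {
                if session, serr := client.NewSession(); serr == nil {
                    runErr := session.Run("exit 0")
                    session.Close()
                    if runErr == nil {
                        client.Close()
                        fmt.Println("SSH is available")
                        return
                    }
                }
                client.Close()
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }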
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
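The sequence above prepares CRI-O for the old-k8s-version profile: write /etc/crictl.yaml, point the pause image at registry.k8s.io/pause:3.2, switch the cgroup manager to cgroupfs, load br_netfilter as a fallback for the missing sysctl, enable IP forwarding, then restart crio. A Go sketch equivalent to the two sed edits (hypothetical; it rewrites the same drop-in file in place):

    // Hypothetical Go equivalent of the pause_image / cgroup_manager sed edits above.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }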
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
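
The kubeadm.yaml shown above is rendered in memory and only then copied to the node as /var/tmp/minikube/kubeadm.yaml.new (the "scp memory" line above). A minimal, self-contained Go sketch of that render-then-write pattern; the kubeadmParams struct and the trimmed template here are illustrative stand-ins rather than minikube's actual generator, and the result is written to a local file instead of being copied over SSH:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical stand-in for the values fed to a kubeadm
// config template; only a few fields seen in this log are reproduced.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.223",
		BindPort:          8443,
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Written locally here; the run above renders in memory and then copies
	// the result to /var/tmp/minikube/kubeadm.yaml.new on the VM.
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(f, p); err != nil {
		panic(err)
	}
}
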
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
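
The three test/ln -fs blocks above follow OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (51391683, 3ec20f2e and b5213941 in this run). A minimal Go sketch of the same idea, shelling out to openssl for the hash; the paths in main are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert links certPath into certsDir under OpenSSL's <subject-hash>.0
// naming, mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
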
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
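
Each of the `-checkend 86400` runs above asks whether the given certificate will expire within the next 86400 seconds (24 hours); openssl exits 0 when the cert is still valid past that point. The same check can be done without shelling out; a minimal sketch using Go's crypto/x509, where the path in main is one of the certs checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is still valid for at least 24h")
}
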
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
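
The repeated pgrep lines above (one roughly every 500ms) are a plain poll-until-found loop: the restart does not continue until a kube-apiserver process matching the pattern shows up. A stripped-down sketch of that pattern in Go, run locally rather than over SSH, with a timeout chosen only for the sake of the example:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every interval until it succeeds
// or the deadline passes, mirroring the repeated pgrep runs in the log.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // at least one matching process exists
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + pattern)
		}
		time.Sleep(interval)
	}
}

func main() {
	// 500ms between attempts matches the spacing of the log lines; the 60s
	// budget here is only an assumption for the sketch.
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver process is up")
}
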
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
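	The one-liner above is how this run pins control-plane.minikube.internal in the guest's /etc/hosts: strip any existing line ending with the hostname, append the fresh IP mapping, and copy the temp file back into place with sudo. A standalone sketch of the same pattern follows; the hostname and IP are simply the values from this log, and the temp-file name is illustrative.
	NAME=control-plane.minikube.internal
	IP=192.168.61.55
	# Drop any existing entry for $NAME, append the new mapping, then install the result.
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts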
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
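	The six openssl invocations above verify that none of the control-plane client, etcd, and front-proxy certificates expire within the next 24 hours; -checkend 86400 makes openssl exit non-zero if the certificate will expire within that many seconds. A minimal sweep over the same files (paths exactly as they appear in this log) could look like:
	# Report any certificate that expires within the next 24h.
	for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	           /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	           /var/lib/minikube/certs/etcd/peer.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null || echo "expiring within 24h: $crt"
	done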
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
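	The probe sequence above is the apiserver readiness wait: first the TCP connection is refused, then /healthz answers 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks settle, and finally 200. minikube drives this through its Go client; an equivalent wait from a shell against the same endpoint could be as small as the loop below (-k skips certificate verification, an assumption made for the sketch rather than something this log shows).
	# Poll the apiserver /healthz endpoint until it returns 200.
	until curl -fsk https://192.168.61.55:8443/healthz >/dev/null 2>&1; do
	  sleep 0.5
	done
	echo "apiserver reports healthy"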
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
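	Each of the half-second lines above is the same probe: pgrep -x -n -f looks for the newest process whose full command line matches kube-apiserver.*minikube.*, and the wait keeps retrying until such a process appears. In this run it never does, and the log collection below follows. A bare-bones version of that wait with an explicit deadline (the 5-minute timeout is illustrative, not taken from this log):
	# Wait up to 5 minutes for a kube-apiserver process to appear.
	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    break
	  fi
	  sleep 0.5
	done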
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
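Every `kubectl describe nodes` attempt fails the same way: the apiserver endpoint on localhost:8443 refuses connections, which is consistent with the empty crictl listings above (no kube-apiserver container was ever created). A hedged triage sketch from a shell on the node; the kubelet status check and the curl probe are manual follow-ups, not commands the harness itself runs:

    # Is kubelet running at all? If it is, its journal should show why no
    # static control-plane pods were started.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 40

    # Confirm the apiserver endpoint really is closed (expect: connection refused)
    curl -ksS https://localhost:8443/healthz || true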
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
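The interleaved pod_ready lines come from the other StartStop profiles (pids 77741, 77526, 77373), each stuck waiting for its metrics-server pod to report Ready. A sketch of inspecting one of them by hand; the <context> placeholder and the k8s-app=metrics-server label selector are assumptions, while the pod name is taken from the log above:

    # <context> stands for the kubeconfig context of the affected profile
    kubectl --context <context> -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context <context> -n kube-system describe pod metrics-server-6867b74b74-l7xbw
    kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-l7xbw \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'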
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
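
Every "describe nodes" attempt above fails the same way: the kubeconfig points kubectl at localhost:8443 and nothing is accepting connections there, so the apiserver is simply not running. A small, hypothetical Go probe (not part of the test harness) that confirms the same condition without going through kubectl could look like this; the address and the two-second timeout are assumptions:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "localhost:8443" // endpoint kubectl is refused on in the log above
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		// This matches the logged state: connection refused, so
    		// "kubectl describe nodes" cannot succeed either.
    		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("something is accepting connections on %s\n", addr)
    }
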
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
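
The interleaved pod_ready.go lines come from the other minikube processes in this run (the differing PIDs in the third column), each polling whether its metrics-server pod has reached the Ready condition. As a rough illustration of what such a readiness check amounts to, here is a hypothetical client-go sketch (not minikube's pod_ready.go); the kubeconfig source and the pod name, copied from one of the log lines, are assumptions and will differ per run:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a kubeconfig path in $KUBECONFIG.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Pod name copied from the log for illustration only.
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"metrics-server-6867b74b74-l7xbw", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
    }
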
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
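
Stepping back, the overall shape of this section is a wait loop: roughly every three seconds the process checks for a running kube-apiserver with pgrep, and each time the check fails it re-gathers kubelet, dmesg, CRI-O, and container-status logs before trying again. A minimal, hypothetical sketch of that loop follows; the deadline and sleep interval are assumptions inferred from the timestamps, not values taken from the harness:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when no matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		// In the log the probes land roughly three seconds apart.
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver; would gather logs as above")
    }
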
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
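
Both init runs block at this point on the same kind of readiness gate: the v1.31.1 run first polls the kubelet's local healthz endpoint at http://127.0.0.1:10248/healthz and then the API server health check, while the v1.20.0 run waits on the older combined wait-control-plane phase; each gate is capped at 4m0s. The probe itself is just an HTTP GET repeated until it returns 200. A self-contained sketch of that style of wait (the timeout and poll interval here are assumptions, not kubeadm's exact constants):

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes.
    func waitHealthy(url string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for " + url)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet is healthy")
    }
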
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
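
With the control plane up, minikube detects the kvm2 driver plus crio runtime and writes a 496-byte bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the snippet below writes a typical bridge + host-local conflist of the same general shape, purely to illustrate what such a file looks like (the subnet, plugin list and field values are assumptions, not the bytes minikube actually ships):

    package main

    import (
        "log"
        "os"
    )

    // An illustrative bridge CNI config in the standard conflist format;
    // the real file written by minikube may differ in fields and values.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Writing under /etc/cni/net.d normally requires root.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
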
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
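
The elevateKubeSystemPrivileges step timed above does two things visible in the log: it binds kube-system's default service account to cluster-admin via the minikube-rbac ClusterRoleBinding, and it retries kubectl get sa default until the default service account exists, i.e. until namespace bootstrapping has caught up. A sketch of the same sequence (binary path and kubectl flags copied from the log; the retry count and interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        // Grant cluster-admin to kube-system's default service account, as the log does.
        exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig).Run()

        // Retry until the default service account appears in the default namespace.
        for i := 0; i < 60; i++ {
            if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
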
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
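
Every addon follows the pattern visible here: the manifests are copied under /etc/kubernetes/addons/ and then applied with KUBECONFIG pointed at /var/lib/minikube/kubeconfig, the metrics-server bundle in one kubectl call and storage-provisioner in another. A sketch of the metrics-server apply as logged (error handling kept minimal):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value assignments, which is how the log sets KUBECONFIG.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
        log.Printf("applied metrics-server manifests:\n%s", out)
    }
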
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
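
All of the pod_ready waits in this run, including the metrics-server-6867b74b74-tk8fq polling from the other profile that is interleaved throughout, come down to reading the Ready condition out of a pod's status. A quick way to spot-check the same condition by hand, shown here for the etcd pod that went Ready above (the context and pod names are taken from this log and would differ per profile):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // Print the Ready condition of a pod, mirroring what the pod_ready wait looks at.
    func main() {
        out, err := exec.Command("kubectl", "--context", "embed-certs-223942",
            "-n", "kube-system", "get", "pod", "etcd-embed-certs-223942",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Ready=%s\n", out) // "True" once the pod passes its readiness checks
    }
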
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
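
The NodePressure verification reads the node's reported capacity, here roughly 17 GiB of ephemeral storage and 2 CPUs, straight from the node object. The same figures can be pulled with kubectl's jsonpath output; a small sketch (node and context names taken from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Capacity fields the NodePressure check reports: cpu and ephemeral-storage.
        query := `cpu={.status.capacity.cpu} storage={.status.capacity['ephemeral-storage']}{"\n"}`
        out, err := exec.Command("kubectl", "--context", "embed-certs-223942",
            "get", "node", "embed-certs-223942", "-o", "jsonpath="+query).Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out)) // e.g. cpu=2 storage=17734596Ki
    }
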
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
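The two Run lines above are the whole bridge CNI step: minikube creates /etc/cni/net.d on the node and copies a 496-byte conflist into it as 1-k8s.conflist. To look at what was actually written, a minimal sketch run from the host is shown below; the profile name is taken from this log, and these commands are illustrative only, not part of the recorded run:

    # open a shell command inside the VM for the profile used in this log
    minikube ssh -p default-k8s-diff-port-070708 -- sudo ls /etc/cni/net.d
    minikube ssh -p default-k8s-diff-port-070708 -- sudo cat /etc/cni/net.d/1-k8s.conflist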
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
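The three addons are applied at this point, but the metrics-server pod stays Pending for the rest of this log (see the kube-system pod lists below, where metrics-server-6867b74b74-88h5g reports ContainersNotReady). A hedged way to inspect it by hand, using the context name configured by this profile, would be:

    kubectl --context default-k8s-diff-port-070708 -n kube-system get pods -o wide | grep metrics-server
    kubectl --context default-k8s-diff-port-070708 -n kube-system describe deploy metrics-server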
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
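The healthz probe above is the check minikube performs directly against the apiserver on the node IP and the non-default port 8444. Reproducing it by hand is a one-liner; certificate verification is skipped with -k because the endpoint uses the cluster's self-signed CA, and the command is illustrative, not taken from this run:

    curl -k https://192.168.39.162:8444/healthz
    # expected output on a healthy apiserver: ok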
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
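With the profile reporting Done, the kubeconfig at /home/jenkins/minikube-integration/19749-11611/kubeconfig now points at default-k8s-diff-port-070708. A quick sanity check from the host (a sketch, not taken from the recorded run) is:

    kubectl config current-context        # should print default-k8s-diff-port-070708
    kubectl get nodes -o wide
    kubectl -n kube-system get pods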
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
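Process 78126 is stuck in kubeadm's kubelet-check loop: the probe of http://localhost:10248/healthz keeps returning connection refused, which means the kubelet on that node never started or exited immediately. The usual first steps to debug this on the affected node are (a generic sketch, not commands taken from this run):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sS http://127.0.0.1:10248/healthz; echo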
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
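The four grep/rm pairs above are the stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the following kubeadm init can regenerate it. Condensed into one loop with the same paths and endpoint as in the log (written here only as an illustration of the pattern):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done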
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
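The file copied above is minikube's bridge CNI configuration. Its exact 496-byte payload is not reproduced in this log; a bridge/portmap conflist of this general shape looks like the sketch below (the plugin names and pod subnet are assumptions, shown only for reference):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF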
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
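The repeated "kubectl get sa default" calls above are a readiness poll: the step timed as elevateKubeSystemPrivileges keeps retrying until kube-controller-manager has created the "default" ServiceAccount in the new cluster. A hand-rolled equivalent of that poll (illustrative only, not the actual minikube code) is:

	# retry until the "default" ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done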
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
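The node-readiness gate above is driven through the Go client; the same check can be reproduced from a shell (illustrative, assuming the kubeconfig context carries the profile name, as minikube normally sets it):

	kubectl --context no-preload-390487 wait --for=condition=Ready \
	  node/no-preload-390487 --timeout=6m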
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
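The metrics-server manifests applied above register the v1beta1.metrics.k8s.io APIService once the pod reports Ready (it is still Pending in the pod listings below). Illustrative follow-up checks, not part of this run:

	kubectl --context no-preload-390487 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-390487 top nodes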
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
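The healthz probe above hits the apiserver's secure port directly. The same probe by hand (illustrative; -k skips verification of the cluster CA):

	curl -k https://192.168.61.55:8443/healthz
	# expected response body: ok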
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
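The grep/rm pairs above are the stale-config cleanup before kubeadm init is retried: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed. A condensed equivalent (illustrative only):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done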
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
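	The failure above ends with the same recovery advice each time; for convenience, the commands the log itself recommends are collected below. This is a minimal sketch, assuming the commands are run on the affected node (for example via `minikube ssh -p <profile>`); the profile name is a placeholder, and the crictl, journalctl, systemctl, and minikube invocations are taken from the kubeadm and minikube output above.
	
		# 1. Check whether the kubelet is running and why it may have failed.
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
	
		# 2. List Kubernetes containers known to cri-o and inspect the failing one
		#    (CONTAINERID comes from the first command's output).
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# 3. Apply the warnings/suggestions from the log: enable the kubelet service
		#    and retry the start with the systemd cgroup driver.
		sudo systemctl enable kubelet.service
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd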
	
	
	==> CRI-O <==
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.894923924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686371894899193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71ba8760-3869-4866-87a3-03a4df183abf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.895575494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c2fce4d-e41b-4e13-9332-5cba897c0e9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.895723441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c2fce4d-e41b-4e13-9332-5cba897c0e9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.895987608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c2fce4d-e41b-4e13-9332-5cba897c0e9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.931556154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48ada032-373e-4527-a1bf-3182e9f16a6b name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.931632649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48ada032-373e-4527-a1bf-3182e9f16a6b name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.932931719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d22fd5e7-70ec-4e7e-bdaf-af1596813346 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.933279107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686371933257057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d22fd5e7-70ec-4e7e-bdaf-af1596813346 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.933707593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cdd5b47-abfc-44cb-9a7e-94f87bb2e6c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.933808116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cdd5b47-abfc-44cb-9a7e-94f87bb2e6c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.934038108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cdd5b47-abfc-44cb-9a7e-94f87bb2e6c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.971103646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85de6bab-84b4-4260-8fda-4e0d2d10cf46 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.971194530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85de6bab-84b4-4260-8fda-4e0d2d10cf46 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.972577180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbab69b2-ee0d-4332-9499-5c1bebe6d99f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.973056823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686371973026756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbab69b2-ee0d-4332-9499-5c1bebe6d99f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.973729528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=662eb674-3f9e-4f0c-bea4-234c19c57f48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.973823443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=662eb674-3f9e-4f0c-bea4-234c19c57f48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:31 no-preload-390487 crio[713]: time="2024-10-11 22:39:31.974017385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=662eb674-3f9e-4f0c-bea4-234c19c57f48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.011192539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71c9a8aa-6c53-4c0a-b7dd-77744a588974 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.011309493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71c9a8aa-6c53-4c0a-b7dd-77744a588974 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.012505461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9910459-afd2-4324-8cca-bef424672a72 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.012900334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686372012878025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9910459-afd2-4324-8cca-bef424672a72 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.014292535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca421189-c22d-4809-881d-2b784efcfd3f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.014378937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca421189-c22d-4809-881d-2b784efcfd3f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:39:32 no-preload-390487 crio[713]: time="2024-10-11 22:39:32.014580602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca421189-c22d-4809-881d-2b784efcfd3f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6096cd67128b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b8e3a7b6dbfdc       storage-provisioner
	08818b22b7103       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   90b87c4142d01       coredns-7c65d6cfc9-cpdng
	32b14e6aa5326       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   c18a08a146672       coredns-7c65d6cfc9-swwtf
	ebad1fa4ce2cd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   52c2634071218       kube-proxy-4g8nw
	d68b4e0d1d7ae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   b72873bae5d0f       kube-controller-manager-no-preload-390487
	f8924a587a9eb       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   e89f75e2f30f8       kube-scheduler-no-preload-390487
	8293e0fb6f1b0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ec3f725c59f83       etcd-no-preload-390487
	027364df9cdb4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   c30579031e2f2       kube-apiserver-no-preload-390487
	358e33d06a269       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   2c36b8febf83d       kube-apiserver-no-preload-390487
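
The container listing above can be cross-checked against CRI-O directly. A minimal Go sketch, shelling out the way the test helpers do; the profile name no-preload-390487 is assumed from the node name in this dump and is not stated elsewhere in the report:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same listing as the "container status" table above, read straight from CRI-O inside the VM.
		out, err := exec.Command("minikube", "-p", "no-preload-390487",
			"ssh", "--", "sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			log.Fatalf("minikube ssh failed: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}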
	
	
	==> coredns [08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-390487
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-390487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=no-preload-390487
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:30:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-390487
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:39:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:35:32 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:35:32 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:35:32 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:35:32 +0000   Fri, 11 Oct 2024 22:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.55
	  Hostname:    no-preload-390487
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e2a509ed2ba444bb34648704e214638
	  System UUID:                9e2a509e-d2ba-444b-b346-48704e214638
	  Boot ID:                    14dc90eb-55c0-46fe-a428-0609dc730585
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-cpdng                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-swwtf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-390487                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-390487             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-390487    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-4g8nw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-390487             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-26g42              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node no-preload-390487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node no-preload-390487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node no-preload-390487 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node no-preload-390487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node no-preload-390487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node no-preload-390487 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-390487 event: Registered Node no-preload-390487 in Controller
	
	
	==> dmesg <==
	[  +0.055612] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.231389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617200] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600324] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct11 22:25] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.056501] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063601] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.224336] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.136968] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.295648] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.392621] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.064861] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.990524] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +3.421036] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.239893] kauditd_printk_skb: 86 callbacks suppressed
	[Oct11 22:30] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.518058] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +4.425541] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.652465] systemd-fstab-generator[3409]: Ignoring "noauto" option for root device
	[  +5.393908] systemd-fstab-generator[3541]: Ignoring "noauto" option for root device
	[  +0.122289] kauditd_printk_skb: 14 callbacks suppressed
	[Oct11 22:31] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f] <==
	{"level":"info","ts":"2024-10-11T22:30:11.267116Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T22:30:11.267378Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"34f12fd29de7a73f","initial-advertise-peer-urls":["https://192.168.61.55:2380"],"listen-peer-urls":["https://192.168.61.55:2380"],"advertise-client-urls":["https://192.168.61.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:30:11.267426Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:30:11.267510Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.55:2380"}
	{"level":"info","ts":"2024-10-11T22:30:11.267543Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.55:2380"}
	{"level":"info","ts":"2024-10-11T22:30:11.510898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f received MsgPreVoteResp from 34f12fd29de7a73f at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f received MsgVoteResp from 34f12fd29de7a73f at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became leader at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 34f12fd29de7a73f elected leader 34f12fd29de7a73f at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.515990Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"34f12fd29de7a73f","local-member-attributes":"{Name:no-preload-390487 ClientURLs:[https://192.168.61.55:2379]}","request-path":"/0/members/34f12fd29de7a73f/attributes","cluster-id":"d57be02f73e7047c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:30:11.516260Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.516786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:30:11.517155Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:30:11.525964Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:30:11.526711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:30:11.532327Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d57be02f73e7047c","local-member-id":"34f12fd29de7a73f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.536950Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.529416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:30:11.532803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:30:11.537864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:30:11.537928Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.541955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.55:2379"}
	
	
	==> kernel <==
	 22:39:32 up 14 min,  0 users,  load average: 0.08, 0.16, 0.11
	Linux no-preload-390487 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc] <==
	W1011 22:35:14.608680       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:35:14.608826       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:35:14.609808       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:35:14.609872       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:36:14.610896       1 handler_proxy.go:99] no RequestInfo found in the context
	W1011 22:36:14.611149       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:36:14.611283       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1011 22:36:14.611233       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:36:14.612475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:36:14.612564       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:38:14.613199       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:38:14.613289       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:38:14.613250       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:38:14.613578       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:38:14.614449       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:38:14.615604       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
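
The repeating 503 responses above mean the aggregated v1beta1.metrics.k8s.io APIService has no healthy backend to proxy to. A hedged sketch for inspecting the APIService and its backing pod; the kube context name and the k8s-app=metrics-server label are assumptions for illustration, not taken from this report:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run prints the combined output of one command; a throwaway helper for this sketch.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%s %v failed: %v\n", name, args, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// Is the aggregated metrics API registered, and does it report Available=True?
		run("kubectl", "--context", "no-preload-390487",
			"get", "apiservice", "v1beta1.metrics.k8s.io")
		// Which pod is meant to back it, and why is it not ready?
		run("kubectl", "--context", "no-preload-390487", "-n", "kube-system",
			"describe", "pod", "-l", "k8s-app=metrics-server")
	}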
	
	
	==> kube-apiserver [358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591] <==
	W1011 22:30:04.948547       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:04.960827       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.015560       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.050642       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.052119       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.057552       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.072173       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.072204       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.184930       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.186464       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.200223       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.227475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.280263       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.341372       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.351045       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.395901       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.410188       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.410410       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.441117       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.476677       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.481609       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.493282       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.589846       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.697489       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.742145       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209] <==
	E1011 22:34:20.442809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:21.009472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:34:50.449570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:34:51.017339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:35:20.457564       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:21.024995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:35:32.271898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-390487"
	E1011 22:35:50.463951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:35:51.034012       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:36:20.470741       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:21.043689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:36:24.237269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.501µs"
	I1011 22:36:35.234135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="199.183µs"
	E1011 22:36:50.477613       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:36:51.051670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:20.484968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:21.060351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:37:50.491130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:37:51.067725       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:38:20.498574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:38:21.078931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:38:50.505431       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:38:51.087206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:39:20.511493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:39:21.094821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:30:22.260844       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:30:22.275717       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.55"]
	E1011 22:30:22.275830       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:30:22.478822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:30:22.478871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:30:22.478916       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:30:22.481830       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:30:22.482053       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:30:22.482064       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:30:22.486716       1 config.go:199] "Starting service config controller"
	I1011 22:30:22.486798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:30:22.486824       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:30:22.486828       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:30:22.487176       1 config.go:328] "Starting node config controller"
	I1011 22:30:22.487283       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:30:22.587474       1 shared_informer.go:320] Caches are synced for node config
	I1011 22:30:22.587501       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:30:22.587520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27] <==
	W1011 22:30:13.622025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:30:13.622062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.504414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 22:30:14.504448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.507036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.507099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.601046       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:30:14.601237       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:30:14.701021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.701076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.716948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:30:14.717003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.805940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:30:14.806128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.810588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 22:30:14.810703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.823575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 22:30:14.823838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.825645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 22:30:14.825726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.842318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:30:14.842548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.897665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.897870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1011 22:30:16.501367       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:38:21 no-preload-390487 kubelet[3415]: E1011 22:38:21.218929    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:38:26 no-preload-390487 kubelet[3415]: E1011 22:38:26.374435    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686306374006040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:26 no-preload-390487 kubelet[3415]: E1011 22:38:26.374476    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686306374006040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:35 no-preload-390487 kubelet[3415]: E1011 22:38:35.219618    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:38:36 no-preload-390487 kubelet[3415]: E1011 22:38:36.375598    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686316375388466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:36 no-preload-390487 kubelet[3415]: E1011 22:38:36.375642    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686316375388466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:46 no-preload-390487 kubelet[3415]: E1011 22:38:46.377855    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686326377465394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:46 no-preload-390487 kubelet[3415]: E1011 22:38:46.378134    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686326377465394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:47 no-preload-390487 kubelet[3415]: E1011 22:38:47.218839    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:38:56 no-preload-390487 kubelet[3415]: E1011 22:38:56.382549    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686336382191439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:38:56 no-preload-390487 kubelet[3415]: E1011 22:38:56.383060    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686336382191439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:01 no-preload-390487 kubelet[3415]: E1011 22:39:01.219415    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:39:06 no-preload-390487 kubelet[3415]: E1011 22:39:06.385040    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686346384816871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:06 no-preload-390487 kubelet[3415]: E1011 22:39:06.385176    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686346384816871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:13 no-preload-390487 kubelet[3415]: E1011 22:39:13.218805    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]: E1011 22:39:16.282580    3415 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]: E1011 22:39:16.386592    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686356386275092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:16 no-preload-390487 kubelet[3415]: E1011 22:39:16.386620    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686356386275092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:24 no-preload-390487 kubelet[3415]: E1011 22:39:24.219132    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:39:26 no-preload-390487 kubelet[3415]: E1011 22:39:26.388297    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686366388046302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:39:26 no-preload-390487 kubelet[3415]: E1011 22:39:26.388341    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686366388046302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5] <==
	I1011 22:30:23.268995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:30:23.296576       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:30:23.296635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:30:23.305119       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:30:23.306946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5!
	I1011 22:30:23.307921       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0713c922-9daa-49bc-83dd-f068c0a969c9", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5 became leader
	I1011 22:30:23.408096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-390487 -n no-preload-390487
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-390487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-26g42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42: exit status 1 (62.094763ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-26g42" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.90s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:33:10.342971   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:33:19.016413   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:33:20.679633   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:33:33.029122   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:34:08.721445   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:34:23.068094   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:34:33.404719   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:34:56.091870   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 14 times in a row)
E1011 22:35:09.465890   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:35:11.426768   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 14 times in a row)
E1011 22:35:24.492653   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 7 times in a row)
E1011 22:35:31.786487   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 14 times in a row)
E1011 22:35:46.131397   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 49 times in a row)
E1011 22:36:34.490508   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 21 times in a row)
E1011 22:36:55.952516   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:36:57.615029   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
(the warning above was emitted 8 times in a row)
E1011 22:37:06.383473   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:38:10.343109   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:38:33.028829   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:39:08.720569   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:39:23.067348   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:40:11.426776   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:40:24.492902   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (223.270385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-323416" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
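What the failing wait amounts to: the helper behind helpers_test.go:329 / start_stop_delete_test.go:274 repeatedly lists pods matching the k8s-app=kubernetes-dashboard label selector against the profile's API server until a pod is Running or the 9m0s deadline expires; the "connection refused" warnings above are those individual poll attempts failing while the apiserver is down. Below is a minimal client-go sketch of that kind of poll, not minikube's actual helper: the kubeconfig path and the 3-second poll interval are illustrative assumptions, while the namespace, label selector, and 9m0s deadline come from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Example kubeconfig location; the real harness points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute) // mirrors the 9m0s wait in the log
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Transient failures (e.g. connection refused while the apiserver is stopped)
			// are logged and retried, much like the WARNING lines above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(3 * time.Second) // poll interval is an assumption, not the harness's value
	}
	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard")
}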
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (219.881076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25
E1011 22:41:55.953519   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25: (1.439913344s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
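
The sysctl probe above exits with status 255 because br_netfilter is not loaded yet, which the log explicitly treats as tolerable; the runner then loads the module and enables IPv4 forwarding before restarting crio. A small sketch of that prerequisite check (assumed paths, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Ensure bridge-netfilter and IPv4 forwarding are available, mirroring the
// sequence in the log: probe the sysctl, load br_netfilter on failure,
// then write ip_forward directly.
func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Module not loaded yet; this is the status-255 case from the log.
		if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
			fmt.Println("modprobe br_netfilter failed")
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		return
	}
	fmt.Println("netfilter prerequisites in place")
}
```
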
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
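
After restarting crio, the runner waits up to 60s for /var/run/crio/crio.sock and then queries the runtime with crictl, as the stat and version calls above show. A sketch of that wait-and-query loop (the timeout value comes from the log; everything else is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// Wait for the CRI-O socket to appear, then report the runtime version,
// as the log does with "stat /var/run/crio/crio.sock" and "crictl version".
func main() {
	sock := "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second) // the log waits up to 60s
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
```
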
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
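
Interleaved with the embed-certs logs, the default-k8s-diff-port machine is still booting: libmachine polls the libvirt network for the domain's DHCP lease and retries with a growing, jittered delay ("will retry after ...") until an IP appears. A generic sketch of that retry shape; the lookup function below is a stand-in, not the real libmachine API:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with a growing, jittered delay, the same shape
// as the "will retry after ..." lines in the log. lookup stands in for
// querying the libvirt DHCP leases.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 3*time.Second {
			delay += delay / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}
```
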
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
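
Because the first "crictl images" check found no preloaded kube-apiserver image, the runner copied the preload tarball over and unpacked it into /var with lz4; the second images check then passes and image loading is skipped. A sketch of the extraction step, using the tar invocation from the log (the tarball path is assumed to have been copied already):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Unpack a preloaded image tarball under /var and clean it up afterwards,
// mirroring the tar/lz4 invocation from the log.
func main() {
	tarball := "/preloaded.tar.lz4" // assumed to be present on the node already
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup failed:", err)
	}
	fmt.Println("preloaded images extracted under /var")
}
```
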
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
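
The kubelet drop-in above clears ExecStart and re-declares it with node-specific flags (hostname override, node IP, kubeconfig paths). A small text/template sketch that renders a unit of the same shape; the struct and its field names are illustrative, only the flag values come from the log:

```go
package main

import (
	"os"
	"text/template"
)

// Render a kubelet systemd drop-in like the one shown in the log.
type kubeletFlags struct {
	BinDir, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletFlags{
		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
		NodeName: "embed-certs-223942",
		NodeIP:   "192.168.72.238",
	})
}
```
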
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
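
Each CA certificate copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is what the paired "openssl x509 -hash -noout" and "ln -fs" commands do. A sketch of that pattern (paths are the ones from the log; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash,
// the same pattern as the openssl/ln pairs in the log.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}
```
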
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
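
The run of "openssl x509 -noout ... -checkend 86400" calls above checks that each control-plane certificate is still valid for at least the next 24 hours; a failing check would trigger regeneration. A sketch of the same check over a few of the certs named in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Report which control-plane certs expire within 24h, mirroring the
// "openssl x509 -noout -checkend 86400" checks in the log.
func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
		if err != nil {
			fmt.Println("expires within 24h (or unreadable):", c)
		} else {
			fmt.Println("valid for at least 24h:", c)
		}
	}
}
```
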
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
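
The stale-config cleanup above greps each static kubeconfig for the expected control-plane endpoint and removes any file that is missing it (here all four are simply absent), so the following "kubeadm init phase kubeconfig" can regenerate them. A sketch of that check-and-remove logic (file list and endpoint are taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Remove kubeconfigs that do not point at the expected control-plane
// endpoint, so kubeadm can regenerate them (same logic as the grep/rm pairs).
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // missing or stale: let kubeadm recreate it
			fmt.Println("removed (stale or absent):", f)
		}
	}
}
```
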
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
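
The healthz sequence above is the usual restart pattern: the anonymous probe first gets 403 while RBAC is not yet bootstrapped, then 500 while post-start hooks (bootstrap-roles, bootstrap-controller, priority classes) are still failing, and finally 200 "ok". A sketch of such a polling loop; the address comes from the log, the timeout and interval are assumptions:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll the apiserver /healthz endpoint until it returns 200, tolerating the
// 403/500 responses seen in the log while bootstrap hooks finish.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe: certificate verification skipped for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.238:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet, status", code) // 403 or 500 while bootstrapping
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```
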
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
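
With the apiserver healthy, a bridge CNI config (496 bytes in the log) is written to /etc/cni/net.d/1-k8s.conflist for the 10.244.0.0/16 pod CIDR chosen earlier. The log does not show the file contents; the sketch below writes a typical bridge+portmap conflist of that general shape, as an assumption rather than the exact bytes minikube uses:

```go
package main

import (
	"fmt"
	"os"
)

// Write a bridge CNI conflist for the 10.244.0.0/16 pod CIDR.
// Illustrative layout only; not necessarily the exact file from the log.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```
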
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
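	For orientation, the block above is the multi-document YAML that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new. A minimal stand-alone Go sketch (not minikube code; the input path is a placeholder) that splits such a file on its document separators and reports each document's kind looks like this:

// sanity-check a rendered kubeadm config: one document per "---" separator,
// each declaring a kind (sketch only; the filename stands in for the real path)
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml.new") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		kind := "?"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s (%d bytes)\n", i+1, kind, len(doc))
	}
}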
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
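	The openssl x509 -noout -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now. A rough stand-alone Go equivalent, with a placeholder certificate path, would be:

// report whether a PEM certificate expires within the next 24h
// (illustrative sketch of the -checkend 86400 idea, not minikube code)
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}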
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
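	The healthz exchange above (403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200) is the usual progression for a restarting apiserver. A minimal sketch of this kind of anonymous polling loop, with the endpoint and timings chosen only for illustration, might look like:

// poll an apiserver healthz endpoint until it returns 200 or a deadline passes;
// TLS verification is skipped because the probe runs without client credentials
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.162:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy before the deadline")
}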
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
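	The pod_ready.go wait above reduces to checking the PodReady condition on each system-critical pod. A stand-alone client-go sketch of that check (k8s.io/client-go assumed as a dependency; the kubeconfig path is a placeholder and the pod name is taken from the log) is:

// fetch one kube-system pod and report whether its Ready condition is True
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-bpv5v", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}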
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
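	provision.go:117 issues a server certificate whose subject alternative names cover the machine's hostnames and IPs, signed by the minikube CA. The sketch below shows the same SAN layout with Go's crypto/x509; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

// generate a server certificate carrying DNS and IP SANs (self-signed sketch)
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-323416"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-323416"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.223")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}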
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
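The run above installs each extra CA the same way for all three certificates: hash the PEM with "openssl x509 -hash -noout -in <cert>", then symlink the PEM to /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it by subject hash. As a rough illustration of those two steps only (a minimal Go sketch, not minikube's actual code; the function name and paths are assumptions):

// installCACert sketches the logged steps: compute the OpenSSL subject
// hash of a CA certificate and symlink the PEM into certsDir as
// "<hash>.0". Illustrative only; shells out to openssl like the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale symlink, as "ln -fs" does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}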
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
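Each "openssl x509 -noout -checkend 86400" probe above asks whether the named control-plane certificate will expire within the next 86400 seconds (24 hours); a non-zero exit would trigger certificate regeneration. A minimal Go sketch of the equivalent check using crypto/x509 (the function name and sample path are illustrative assumptions, not part of minikube):

// expiresWithin24h reports whether the first certificate in a PEM file
// expires within the next 24 hours, mirroring
// "openssl x509 -noout -checkend 86400". Illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}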
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
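The cache_images lines above follow one pattern per image: inspect the image ID in the runtime, remove the stale tag with crictl if the expected hash is missing, then load the pre-copied tarball with podman. The following is a minimal Go sketch of that flow, not minikube's actual code; the helper name and the local exec calls are illustrative (minikube runs these commands over SSH from pkg/minikube).

// Hypothetical sketch of the check-remove-load image cache flow seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureCachedImage is an illustrative helper, not minikube's API.
func ensureCachedImage(image, wantID, tarball string) error {
	// Ask the runtime which ID is currently behind the tag.
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the expected ID
	}
	// Drop whatever tag is there so the load below wins ("needs transfer" case).
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached tarball that was copied to /var/lib/minikube/images.
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := ensureCachedImage(
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
		"/var/lib/minikube/images/coredns_v1.11.3")
	fmt.Println(err)
}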
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
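The six openssl invocations above check that each control-plane certificate is still valid for at least 86400 seconds (24 hours) before reuse. A minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk; the path and helper name are illustrative only.

// Hypothetical equivalent of `openssl x509 -noout -checkend 86400 -in <cert>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}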
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
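The preceding api_server.go lines show the healthz wait: connection refused, then 403 (anonymous user before RBAC bootstrap), then 500 while poststarthooks finish, and finally 200 "ok" after about 4 seconds. A minimal Go sketch of that polling pattern, under the assumption that certificate verification is skipped purely for brevity (minikube itself authenticates to the endpoint); the function name is illustrative.

// Hypothetical poll of the kube-apiserver /healthz endpoint until it reports "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses (RBAC bootstrap / poststarthooks pending) mean keep waiting.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.55:8443/healthz", 2*time.Minute))
}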
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
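	Every "describe nodes" attempt in these cycles fails the same way: the kubeconfig under /var/lib/minikube points kubectl at localhost:8443, and the connection is refused, which is consistent with crictl finding no kube-apiserver container at all. A minimal reachability check along those lines, a sketch that assumes it runs on the node itself, might look like:

	// Hedged sketch: probe the apiserver endpoint that "kubectl describe nodes"
	// is pointed at in the log above. A refused dial on localhost:8443 matches
	// the repeated "connection to the server localhost:8443 was refused" errors.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}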
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
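
The sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not mention it is removed so the following kubeadm init can regenerate it. A small sketch of the same keep-or-remove decision, run locally for illustration only (minikube performs it over SSH on the guest, via the grep/rm commands shown above):

    // Sketch: keep a kubeconfig only if it already points at the expected
    // control-plane endpoint; otherwise remove it so kubeadm rewrites it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanStaleConfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // already targets the expected endpoint; keep it
        }
        switch rmErr := os.Remove(path); {
        case rmErr == nil:
            fmt.Printf("removed stale %s\n", path)
        case os.IsNotExist(rmErr):
            // nothing on disk; kubeadm init will create it fresh
        default:
            return rmErr
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := cleanStaleConfig("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
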
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
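
The run of `kubectl get sa default` commands above is a simple poll: kubeadm creates the default ServiceAccount shortly after the API server comes up, and minikube retries roughly every half second until the lookup succeeds. A stripped-down sketch of that wait, with an assumed two-minute ceiling and the kubeconfig path taken from the log:

    // Sketch: poll for the default ServiceAccount until it exists, mirroring
    // the repeated `kubectl get sa default` calls in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func defaultSAExists(kubeconfig string) bool {
        cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
        return cmd.Run() == nil
    }

    func main() {
        kubeconfig := "/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if defaultSAExists(kubeconfig) {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
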
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
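The [kubelet-check] lines above poll the kubelet's local healthz endpoint (http://localhost:10248/healthz) until it answers or a timeout is hit. Below is a minimal Go sketch of that probe; the 2s request timeout, 5s retry interval and 40s deadline are illustrative values, not kubeadm's exact ones.

    // kubelet_healthz.go: a minimal sketch of the probe that kubeadm's
    // [kubelet-check] phase performs against the kubelet healthz endpoint,
    // as seen in the log above. Interval and deadline are illustrative.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        deadline := time.Now().Add(40 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
                err = fmt.Errorf("unexpected status %d", resp.StatusCode)
            }
            fmt.Printf("kubelet not healthy yet: %v\n", err)
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for the kubelet to become healthy")
    }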
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
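The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it (here the files simply do not exist yet). A minimal local sketch of the same check-and-remove pattern follows; minikube itself runs these as shell commands over SSH via ssh_runner, and the endpoint string is copied from the log.

    // stale_config_cleanup.go: a sketch of the stale-kubeconfig check shown
    // in the log (grep for the control-plane endpoint, then rm -f), done
    // locally with the standard library instead of over SSH.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443" // taken from the log above
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: treat the config as stale and remove it.
                fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
                _ = os.Remove(f)
                continue
            }
            fmt.Printf("%s already points at %s - keeping\n", f, endpoint)
        }
    }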
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
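The step above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; the sketch below writes a generic bridge+portmap conflist purely to illustrate the shape of such a file, with made-up field values, to a temp path so it is safe to run.

    // bridge_cni_sketch.go: writes an *illustrative* bridge CNI config of the
    // kind referenced above. The real /etc/cni/net.d/1-k8s.conflist is
    // generated by minikube and is not reproduced in this log; subnet and
    // plugin options below are generic examples only.
    package main

    import (
        "fmt"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        path := "/tmp/1-k8s.conflist" // not /etc/cni/net.d, so the sketch has no side effects
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("wrote example bridge conflist to", path)
    }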
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
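The pod_ready block that just finished waits for every system-critical pod (the label selectors printed above) to report Ready. A rough equivalent using kubectl is sketched below; the jsonpath expression, poll interval and use of kubectl are assumptions for illustration, since the log does not show minikube's client code.

    // pod_ready_wait.go: a sketch of the "extra waiting ... to be Ready" loop
    // summarized above, implemented by shelling out to kubectl. The label
    // selectors mirror the ones printed in the log; the 6m deadline matches
    // the "waiting up to 6m0s" messages. kubectl must be on PATH and pointed
    // at the right context.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podsReady(selector string) bool {
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
            "-l", selector,
            "-o", `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
        if err != nil {
            return false
        }
        statuses := strings.Fields(string(out))
        if len(statuses) == 0 {
            return false
        }
        for _, s := range statuses {
            if s != "True" {
                return false
            }
        }
        return true
    }

    func main() {
        selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
        deadline := time.Now().Add(6 * time.Minute)
        for _, sel := range selectors {
            for !podsReady(sel) {
                if time.Now().After(deadline) {
                    fmt.Println("timed out waiting for pods with label", sel)
                    return
                }
                time.Sleep(2 * time.Second)
            }
            fmt.Println("pods with label", sel, "are Ready")
        }
    }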
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
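The healthz wait above issues an HTTPS GET against https://192.168.61.55:8443/healthz and expects a 200 with body "ok". A bare-bones sketch of that probe follows; it skips TLS verification for brevity, whereas a real check would trust the cluster CA, and the log does not show how minikube authenticates the request.

    // apiserver_healthz.go: a rough sketch of the probe logged above
    // ("Checking apiserver healthz at https://192.168.61.55:8443/healthz").
    // TLS verification is skipped purely to keep the example short.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://192.168.61.55:8443/healthz")
        if err != nil {
            fmt.Println("apiserver healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }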
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
	
	
	==> CRI-O <==
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.000253275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686516000231639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c375da90-422a-4cef-a0f2-668d3a6f36e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.000656027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0027c531-2dc4-49c8-bb66-715f259f9229 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.000771969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0027c531-2dc4-49c8-bb66-715f259f9229 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.000825455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0027c531-2dc4-49c8-bb66-715f259f9229 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.032493862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0e5753f-6379-4c8c-9952-b8cf3814e372 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.032593855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0e5753f-6379-4c8c-9952-b8cf3814e372 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.033503006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abd34430-dc86-4d5e-abe8-83afcd542a62 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.033949255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686516033920115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abd34430-dc86-4d5e-abe8-83afcd542a62 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.034416400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9163db8f-bc0f-4f89-bfe1-8f494bbe30b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.034484665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9163db8f-bc0f-4f89-bfe1-8f494bbe30b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.034540810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9163db8f-bc0f-4f89-bfe1-8f494bbe30b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.068143329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40f8f98b-7139-44f9-9c54-670e6502f0fe name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.068256151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40f8f98b-7139-44f9-9c54-670e6502f0fe name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.069622134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e929334-27e8-44a8-9736-e881a629e354 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.070175932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686516070148861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e929334-27e8-44a8-9736-e881a629e354 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.071129001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7564e534-53b8-4afc-826b-a828fe5ddfa8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.071199404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7564e534-53b8-4afc-826b-a828fe5ddfa8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.071246362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7564e534-53b8-4afc-826b-a828fe5ddfa8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.104278628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec58217f-9673-4cdd-9b91-89b872b15062 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.104376396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec58217f-9673-4cdd-9b91-89b872b15062 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.105836224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f1ed882-e768-4b65-8409-96eafd2c3226 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.106263997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686516106238343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f1ed882-e768-4b65-8409-96eafd2c3226 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.106804933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188fd868-d047-43b0-8150-f848c869c8e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.106871695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188fd868-d047-43b0-8150-f848c869c8e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:41:56 old-k8s-version-323416 crio[634]: time="2024-10-11 22:41:56.106906250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=188fd868-d047-43b0-8150-f848c869c8e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct11 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050928] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.110729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.580711] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636937] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.157348] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.054654] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064708] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.165294] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.159768] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.272781] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.674030] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.066044] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.222707] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[Oct11 22:25] kauditd_printk_skb: 46 callbacks suppressed
	[Oct11 22:28] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Oct11 22:30] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.064434] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:41:56 up 17 min,  0 users,  load average: 0.26, 0.09, 0.02
	Linux old-k8s-version-323416 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000a70fc0)
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: goroutine 150 [select]:
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bddef0, 0x4f0ac20, 0xc00073af00, 0x1, 0xc0001000c0)
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0003a8380, 0xc0001000c0)
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a68870, 0xc00094a520)
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 11 22:41:50 old-k8s-version-323416 kubelet[6532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 11 22:41:50 old-k8s-version-323416 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 11 22:41:50 old-k8s-version-323416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 11 22:41:51 old-k8s-version-323416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 11 22:41:51 old-k8s-version-323416 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 11 22:41:51 old-k8s-version-323416 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 11 22:41:51 old-k8s-version-323416 kubelet[6541]: I1011 22:41:51.395825    6541 server.go:416] Version: v1.20.0
	Oct 11 22:41:51 old-k8s-version-323416 kubelet[6541]: I1011 22:41:51.396073    6541 server.go:837] Client rotation is on, will bootstrap in background
	Oct 11 22:41:51 old-k8s-version-323416 kubelet[6541]: I1011 22:41:51.397983    6541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 11 22:41:51 old-k8s-version-323416 kubelet[6541]: W1011 22:41:51.399020    6541 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 11 22:41:51 old-k8s-version-323416 kubelet[6541]: I1011 22:41:51.399112    6541 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416
E1011 22:41:57.614749   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (225.099246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-323416" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)
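Every kubelet-check failure above is the same symptom: the kubelet never answers http://localhost:10248/healthz, so kubeadm's wait-control-plane phase times out and CRI-O reports no control-plane containers at all, while systemd shows kubelet.service crash-looping (restart counter 114). Minikube's own suggestion points at the kubelet cgroup driver (related issue 4172). A minimal follow-up sketch, assuming shell access to the old-k8s-version-323416 VM and using only the commands already quoted in the log above (the final minikube retry line is illustrative, not something the harness ran):

	# Why does the kubelet keep exiting? systemd logged status=255/EXCEPTION above.
	systemctl status kubelet
	journalctl -xeu kubelet

	# Confirm whether CRI-O ever started any control-plane containers (the report shows an empty list).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry with the cgroup driver minikube suggests for this symptom.
	minikube start -p old-k8s-version-323416 --extra-config=kubelet.cgroup-driver=systemd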

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (398.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223942 -n embed-certs-223942
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:44:58.734553764 +0000 UTC m=+6418.748911277
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-223942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-223942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.04µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-223942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-223942 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-223942 logs -n 25: (1.181556298s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	| start   | -p newest-cni-555648 --memory=2200 --alsologtostderr   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	| addons  | enable metrics-server -p newest-cni-555648             | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-555648                                   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:44:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:44:04.929267   84310 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:44:04.929378   84310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:44:04.929386   84310 out.go:358] Setting ErrFile to fd 2...
	I1011 22:44:04.929391   84310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:44:04.929574   84310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:44:04.930109   84310 out.go:352] Setting JSON to false
	I1011 22:44:04.931029   84310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8790,"bootTime":1728677855,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:44:04.931114   84310 start.go:139] virtualization: kvm guest
	I1011 22:44:04.933984   84310 out.go:177] * [newest-cni-555648] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:44:04.935346   84310 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:44:04.935374   84310 notify.go:220] Checking for updates...
	I1011 22:44:04.937828   84310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:44:04.938935   84310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:44:04.940245   84310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:04.941368   84310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:44:04.942442   84310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:44:04.943917   84310 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944007   84310 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944086   84310 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944154   84310 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:44:04.982814   84310 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:44:04.983997   84310 start.go:297] selected driver: kvm2
	I1011 22:44:04.984012   84310 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:44:04.984023   84310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:44:04.984705   84310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:44:04.984803   84310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:44:05.000396   84310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:44:05.000449   84310 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1011 22:44:05.000513   84310 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1011 22:44:05.000787   84310 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 22:44:05.000818   84310 cni.go:84] Creating CNI manager for ""
	I1011 22:44:05.000863   84310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:44:05.000871   84310 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 22:44:05.000917   84310 start.go:340] cluster config:
	{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:44:05.001009   84310 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:44:05.002796   84310 out.go:177] * Starting "newest-cni-555648" primary control-plane node in "newest-cni-555648" cluster
	I1011 22:44:05.003904   84310 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:44:05.003948   84310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 22:44:05.003961   84310 cache.go:56] Caching tarball of preloaded images
	I1011 22:44:05.004039   84310 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:44:05.004053   84310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 22:44:05.004151   84310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:44:05.004175   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json: {Name:mka41f2ff10a8dbe5874167147443fa9f14151f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:05.004349   84310 start.go:360] acquireMachinesLock for newest-cni-555648: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:44:05.004386   84310 start.go:364] duration metric: took 21.411µs to acquireMachinesLock for "newest-cni-555648"
	I1011 22:44:05.004408   84310 start.go:93] Provisioning new machine with config: &{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:44:05.004495   84310 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 22:44:05.006123   84310 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 22:44:05.006273   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:05.006311   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:05.020636   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I1011 22:44:05.021105   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:05.021627   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:05.021649   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:05.022052   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:05.022264   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:05.022447   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:05.022680   84310 start.go:159] libmachine.API.Create for "newest-cni-555648" (driver="kvm2")
	I1011 22:44:05.022710   84310 client.go:168] LocalClient.Create starting
	I1011 22:44:05.022748   84310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 22:44:05.022788   84310 main.go:141] libmachine: Decoding PEM data...
	I1011 22:44:05.022809   84310 main.go:141] libmachine: Parsing certificate...
	I1011 22:44:05.022890   84310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 22:44:05.022916   84310 main.go:141] libmachine: Decoding PEM data...
	I1011 22:44:05.022935   84310 main.go:141] libmachine: Parsing certificate...
	I1011 22:44:05.022958   84310 main.go:141] libmachine: Running pre-create checks...
	I1011 22:44:05.022974   84310 main.go:141] libmachine: (newest-cni-555648) Calling .PreCreateCheck
	I1011 22:44:05.023352   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:05.023740   84310 main.go:141] libmachine: Creating machine...
	I1011 22:44:05.023753   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Create
	I1011 22:44:05.023896   84310 main.go:141] libmachine: (newest-cni-555648) Creating KVM machine...
	I1011 22:44:05.025190   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found existing default KVM network
	I1011 22:44:05.026351   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.026210   84333 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:64:26} reservation:<nil>}
	I1011 22:44:05.027494   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.027413   84333 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4900}
	I1011 22:44:05.027513   84310 main.go:141] libmachine: (newest-cni-555648) DBG | created network xml: 
	I1011 22:44:05.027524   84310 main.go:141] libmachine: (newest-cni-555648) DBG | <network>
	I1011 22:44:05.027532   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <name>mk-newest-cni-555648</name>
	I1011 22:44:05.027541   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <dns enable='no'/>
	I1011 22:44:05.027550   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   
	I1011 22:44:05.027563   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1011 22:44:05.027574   84310 main.go:141] libmachine: (newest-cni-555648) DBG |     <dhcp>
	I1011 22:44:05.027592   84310 main.go:141] libmachine: (newest-cni-555648) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1011 22:44:05.027606   84310 main.go:141] libmachine: (newest-cni-555648) DBG |     </dhcp>
	I1011 22:44:05.027634   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   </ip>
	I1011 22:44:05.027665   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   
	I1011 22:44:05.027674   84310 main.go:141] libmachine: (newest-cni-555648) DBG | </network>
	I1011 22:44:05.027682   84310 main.go:141] libmachine: (newest-cni-555648) DBG | 
	I1011 22:44:05.033008   84310 main.go:141] libmachine: (newest-cni-555648) DBG | trying to create private KVM network mk-newest-cni-555648 192.168.50.0/24...
	I1011 22:44:05.101048   84310 main.go:141] libmachine: (newest-cni-555648) DBG | private KVM network mk-newest-cni-555648 192.168.50.0/24 created
	I1011 22:44:05.101080   84310 main.go:141] libmachine: (newest-cni-555648) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 ...
	I1011 22:44:05.101091   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.101030   84333 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:05.101104   84310 main.go:141] libmachine: (newest-cni-555648) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 22:44:05.101239   84310 main.go:141] libmachine: (newest-cni-555648) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 22:44:05.351303   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.351180   84333 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa...
	I1011 22:44:05.506606   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.506497   84333 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/newest-cni-555648.rawdisk...
	I1011 22:44:05.506654   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Writing magic tar header
	I1011 22:44:05.506667   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Writing SSH key tar header
	I1011 22:44:05.506683   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.506644   84333 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 ...
	I1011 22:44:05.506767   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648
	I1011 22:44:05.506811   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 22:44:05.506828   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 (perms=drwx------)
	I1011 22:44:05.506842   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:05.506857   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 22:44:05.506866   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 22:44:05.506873   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins
	I1011 22:44:05.506880   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home
	I1011 22:44:05.506888   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Skipping /home - not owner
	I1011 22:44:05.506904   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 22:44:05.506912   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 22:44:05.506919   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 22:44:05.506926   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 22:44:05.506932   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 22:44:05.506939   84310 main.go:141] libmachine: (newest-cni-555648) Creating domain...
	I1011 22:44:05.507980   84310 main.go:141] libmachine: (newest-cni-555648) define libvirt domain using xml: 
	I1011 22:44:05.507999   84310 main.go:141] libmachine: (newest-cni-555648) <domain type='kvm'>
	I1011 22:44:05.508009   84310 main.go:141] libmachine: (newest-cni-555648)   <name>newest-cni-555648</name>
	I1011 22:44:05.508017   84310 main.go:141] libmachine: (newest-cni-555648)   <memory unit='MiB'>2200</memory>
	I1011 22:44:05.508031   84310 main.go:141] libmachine: (newest-cni-555648)   <vcpu>2</vcpu>
	I1011 22:44:05.508040   84310 main.go:141] libmachine: (newest-cni-555648)   <features>
	I1011 22:44:05.508063   84310 main.go:141] libmachine: (newest-cni-555648)     <acpi/>
	I1011 22:44:05.508073   84310 main.go:141] libmachine: (newest-cni-555648)     <apic/>
	I1011 22:44:05.508079   84310 main.go:141] libmachine: (newest-cni-555648)     <pae/>
	I1011 22:44:05.508085   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508091   84310 main.go:141] libmachine: (newest-cni-555648)   </features>
	I1011 22:44:05.508101   84310 main.go:141] libmachine: (newest-cni-555648)   <cpu mode='host-passthrough'>
	I1011 22:44:05.508112   84310 main.go:141] libmachine: (newest-cni-555648)   
	I1011 22:44:05.508121   84310 main.go:141] libmachine: (newest-cni-555648)   </cpu>
	I1011 22:44:05.508132   84310 main.go:141] libmachine: (newest-cni-555648)   <os>
	I1011 22:44:05.508148   84310 main.go:141] libmachine: (newest-cni-555648)     <type>hvm</type>
	I1011 22:44:05.508159   84310 main.go:141] libmachine: (newest-cni-555648)     <boot dev='cdrom'/>
	I1011 22:44:05.508168   84310 main.go:141] libmachine: (newest-cni-555648)     <boot dev='hd'/>
	I1011 22:44:05.508177   84310 main.go:141] libmachine: (newest-cni-555648)     <bootmenu enable='no'/>
	I1011 22:44:05.508184   84310 main.go:141] libmachine: (newest-cni-555648)   </os>
	I1011 22:44:05.508192   84310 main.go:141] libmachine: (newest-cni-555648)   <devices>
	I1011 22:44:05.508203   84310 main.go:141] libmachine: (newest-cni-555648)     <disk type='file' device='cdrom'>
	I1011 22:44:05.508218   84310 main.go:141] libmachine: (newest-cni-555648)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/boot2docker.iso'/>
	I1011 22:44:05.508239   84310 main.go:141] libmachine: (newest-cni-555648)       <target dev='hdc' bus='scsi'/>
	I1011 22:44:05.508250   84310 main.go:141] libmachine: (newest-cni-555648)       <readonly/>
	I1011 22:44:05.508259   84310 main.go:141] libmachine: (newest-cni-555648)     </disk>
	I1011 22:44:05.508269   84310 main.go:141] libmachine: (newest-cni-555648)     <disk type='file' device='disk'>
	I1011 22:44:05.508277   84310 main.go:141] libmachine: (newest-cni-555648)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 22:44:05.508290   84310 main.go:141] libmachine: (newest-cni-555648)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/newest-cni-555648.rawdisk'/>
	I1011 22:44:05.508306   84310 main.go:141] libmachine: (newest-cni-555648)       <target dev='hda' bus='virtio'/>
	I1011 22:44:05.508318   84310 main.go:141] libmachine: (newest-cni-555648)     </disk>
	I1011 22:44:05.508332   84310 main.go:141] libmachine: (newest-cni-555648)     <interface type='network'>
	I1011 22:44:05.508344   84310 main.go:141] libmachine: (newest-cni-555648)       <source network='mk-newest-cni-555648'/>
	I1011 22:44:05.508353   84310 main.go:141] libmachine: (newest-cni-555648)       <model type='virtio'/>
	I1011 22:44:05.508361   84310 main.go:141] libmachine: (newest-cni-555648)     </interface>
	I1011 22:44:05.508369   84310 main.go:141] libmachine: (newest-cni-555648)     <interface type='network'>
	I1011 22:44:05.508377   84310 main.go:141] libmachine: (newest-cni-555648)       <source network='default'/>
	I1011 22:44:05.508384   84310 main.go:141] libmachine: (newest-cni-555648)       <model type='virtio'/>
	I1011 22:44:05.508395   84310 main.go:141] libmachine: (newest-cni-555648)     </interface>
	I1011 22:44:05.508405   84310 main.go:141] libmachine: (newest-cni-555648)     <serial type='pty'>
	I1011 22:44:05.508431   84310 main.go:141] libmachine: (newest-cni-555648)       <target port='0'/>
	I1011 22:44:05.508454   84310 main.go:141] libmachine: (newest-cni-555648)     </serial>
	I1011 22:44:05.508472   84310 main.go:141] libmachine: (newest-cni-555648)     <console type='pty'>
	I1011 22:44:05.508487   84310 main.go:141] libmachine: (newest-cni-555648)       <target type='serial' port='0'/>
	I1011 22:44:05.508498   84310 main.go:141] libmachine: (newest-cni-555648)     </console>
	I1011 22:44:05.508508   84310 main.go:141] libmachine: (newest-cni-555648)     <rng model='virtio'>
	I1011 22:44:05.508521   84310 main.go:141] libmachine: (newest-cni-555648)       <backend model='random'>/dev/random</backend>
	I1011 22:44:05.508531   84310 main.go:141] libmachine: (newest-cni-555648)     </rng>
	I1011 22:44:05.508546   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508562   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508581   84310 main.go:141] libmachine: (newest-cni-555648)   </devices>
	I1011 22:44:05.508594   84310 main.go:141] libmachine: (newest-cni-555648) </domain>
	I1011 22:44:05.508608   84310 main.go:141] libmachine: (newest-cni-555648) 
	I1011 22:44:05.512354   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:e5:b8:5b in network default
	I1011 22:44:05.512926   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring networks are active...
	I1011 22:44:05.512951   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:05.513618   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring network default is active
	I1011 22:44:05.513927   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring network mk-newest-cni-555648 is active
	I1011 22:44:05.514434   84310 main.go:141] libmachine: (newest-cni-555648) Getting domain xml...
	I1011 22:44:05.515192   84310 main.go:141] libmachine: (newest-cni-555648) Creating domain...
	I1011 22:44:06.765116   84310 main.go:141] libmachine: (newest-cni-555648) Waiting to get IP...
	I1011 22:44:06.765902   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:06.766355   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:06.766411   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:06.766353   84333 retry.go:31] will retry after 282.599684ms: waiting for machine to come up
	I1011 22:44:07.050927   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.051353   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.051382   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.051307   84333 retry.go:31] will retry after 283.892428ms: waiting for machine to come up
	I1011 22:44:07.336751   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.337244   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.337271   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.337189   84333 retry.go:31] will retry after 408.901556ms: waiting for machine to come up
	I1011 22:44:07.747499   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.747990   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.748035   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.747943   84333 retry.go:31] will retry after 383.080413ms: waiting for machine to come up
	I1011 22:44:08.132453   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:08.132900   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:08.132930   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:08.132832   84333 retry.go:31] will retry after 544.978224ms: waiting for machine to come up
	I1011 22:44:08.679476   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:08.679909   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:08.679931   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:08.679861   84333 retry.go:31] will retry after 809.318003ms: waiting for machine to come up
	I1011 22:44:09.490794   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:09.491432   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:09.491465   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:09.491352   84333 retry.go:31] will retry after 928.395613ms: waiting for machine to come up
	I1011 22:44:10.421620   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:10.422115   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:10.422146   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:10.422098   84333 retry.go:31] will retry after 1.418741116s: waiting for machine to come up
	I1011 22:44:11.842596   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:11.843004   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:11.843033   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:11.842965   84333 retry.go:31] will retry after 1.854251254s: waiting for machine to come up
	I1011 22:44:13.699805   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:13.700297   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:13.700323   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:13.700251   84333 retry.go:31] will retry after 1.878810401s: waiting for machine to come up
	I1011 22:44:15.580873   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:15.581354   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:15.581401   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:15.581305   84333 retry.go:31] will retry after 2.423754064s: waiting for machine to come up
	I1011 22:44:18.006085   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:18.006507   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:18.006527   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:18.006471   84333 retry.go:31] will retry after 2.377932527s: waiting for machine to come up
	I1011 22:44:20.386296   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:20.386704   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:20.386733   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:20.386670   84333 retry.go:31] will retry after 4.448322326s: waiting for machine to come up
	I1011 22:44:24.840159   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:24.840484   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:24.840510   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:24.840447   84333 retry.go:31] will retry after 5.403094469s: waiting for machine to come up
	I1011 22:44:30.244569   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.245038   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has current primary IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.245063   84310 main.go:141] libmachine: (newest-cni-555648) Found IP for machine: 192.168.50.28
	I1011 22:44:30.245072   84310 main.go:141] libmachine: (newest-cni-555648) Reserving static IP address...
	I1011 22:44:30.245457   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find host DHCP lease matching {name: "newest-cni-555648", mac: "52:54:00:be:f3:e1", ip: "192.168.50.28"} in network mk-newest-cni-555648
	I1011 22:44:30.320948   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Getting to WaitForSSH function...
	I1011 22:44:30.320980   84310 main.go:141] libmachine: (newest-cni-555648) Reserved static IP address: 192.168.50.28
	I1011 22:44:30.321056   84310 main.go:141] libmachine: (newest-cni-555648) Waiting for SSH to be available...
	I1011 22:44:30.324115   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.324481   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648
	I1011 22:44:30.324503   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find defined IP address of network mk-newest-cni-555648 interface with MAC address 52:54:00:be:f3:e1
	I1011 22:44:30.324587   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH client type: external
	I1011 22:44:30.324605   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa (-rw-------)
	I1011 22:44:30.324652   84310 main.go:141] libmachine: (newest-cni-555648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:44:30.324664   84310 main.go:141] libmachine: (newest-cni-555648) DBG | About to run SSH command:
	I1011 22:44:30.324676   84310 main.go:141] libmachine: (newest-cni-555648) DBG | exit 0
	I1011 22:44:30.328587   84310 main.go:141] libmachine: (newest-cni-555648) DBG | SSH cmd err, output: exit status 255: 
	I1011 22:44:30.328611   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 22:44:30.328622   84310 main.go:141] libmachine: (newest-cni-555648) DBG | command : exit 0
	I1011 22:44:30.328629   84310 main.go:141] libmachine: (newest-cni-555648) DBG | err     : exit status 255
	I1011 22:44:30.328639   84310 main.go:141] libmachine: (newest-cni-555648) DBG | output  : 
	I1011 22:44:33.329043   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Getting to WaitForSSH function...
	I1011 22:44:33.331662   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.331967   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.332002   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.332116   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH client type: external
	I1011 22:44:33.332141   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa (-rw-------)
	I1011 22:44:33.332194   84310 main.go:141] libmachine: (newest-cni-555648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:44:33.332218   84310 main.go:141] libmachine: (newest-cni-555648) DBG | About to run SSH command:
	I1011 22:44:33.332237   84310 main.go:141] libmachine: (newest-cni-555648) DBG | exit 0
	I1011 22:44:33.458496   84310 main.go:141] libmachine: (newest-cni-555648) DBG | SSH cmd err, output: <nil>: 
	I1011 22:44:33.458819   84310 main.go:141] libmachine: (newest-cni-555648) KVM machine creation complete!
	I1011 22:44:33.459248   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:33.459737   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:33.459910   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:33.460038   84310 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 22:44:33.460055   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:44:33.461425   84310 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 22:44:33.461439   84310 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 22:44:33.461444   84310 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 22:44:33.461449   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.463234   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.463575   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.463612   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.463742   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.463917   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.464052   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.464159   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.464337   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.464523   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.464532   84310 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 22:44:33.578109   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:44:33.578138   84310 main.go:141] libmachine: Detecting the provisioner...
	I1011 22:44:33.578149   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.580934   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.581401   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.581441   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.581565   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.581740   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.581897   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.582025   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.582176   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.582341   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.582352   84310 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 22:44:33.692103   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 22:44:33.692201   84310 main.go:141] libmachine: found compatible host: buildroot
	I1011 22:44:33.692214   84310 main.go:141] libmachine: Provisioning with buildroot...
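
Provisioner detection above boils down to reading /etc/os-release over SSH and matching the ID field, which here resolves to buildroot. A small Go sketch of that parsing step, fed with the Buildroot output shown above (a simplification of what minikube actually does):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns the KEY=VALUE lines of /etc/os-release into a map,
    // stripping surrounding quotes from values such as PRETTY_NAME.
    func parseOSRelease(content string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            key, value, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[key] = strings.Trim(value, `"`)
        }
        return fields
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fmt.Println(parseOSRelease(sample)["ID"]) // buildroot
    }
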
	I1011 22:44:33.692231   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.692519   84310 buildroot.go:166] provisioning hostname "newest-cni-555648"
	I1011 22:44:33.692551   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.692739   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.695130   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.695430   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.695462   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.695568   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.695731   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.695855   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.695977   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.696117   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.696334   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.696351   84310 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555648 && echo "newest-cni-555648" | sudo tee /etc/hostname
	I1011 22:44:33.821605   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555648
	
	I1011 22:44:33.821636   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.824568   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.824961   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.824991   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.825137   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.825311   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.825461   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.825581   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.825743   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.825960   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.825978   84310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555648/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:44:33.944292   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:44:33.944318   84310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:44:33.944354   84310 buildroot.go:174] setting up certificates
	I1011 22:44:33.944367   84310 provision.go:84] configureAuth start
	I1011 22:44:33.944381   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.944641   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:33.947604   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.947993   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.948095   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.948184   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.950745   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.951151   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.951169   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.951452   84310 provision.go:143] copyHostCerts
	I1011 22:44:33.951526   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:44:33.951551   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:44:33.951649   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:44:33.951787   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:44:33.951802   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:44:33.951842   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:44:33.951934   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:44:33.951944   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:44:33.951976   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:44:33.952057   84310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555648 san=[127.0.0.1 192.168.50.28 localhost minikube newest-cni-555648]
	I1011 22:44:34.084547   84310 provision.go:177] copyRemoteCerts
	I1011 22:44:34.084611   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:44:34.084634   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.087660   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.087932   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.087957   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.088144   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.088329   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.088468   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.088592   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.172724   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:44:34.196727   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:44:34.220574   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:44:34.244872   84310 provision.go:87] duration metric: took 300.491073ms to configureAuth
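
configureAuth above took about 300ms and generated a server certificate whose SANs cover 127.0.0.1, 192.168.50.28, localhost, minikube and newest-cni-555648. The sketch below produces a certificate with the same SAN list using only the Go standard library; it self-signs for brevity, whereas minikube signs server.pem against its ca.pem, so treat it as an approximation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-555648"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // Same SAN set as the server cert generated in the log above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-555648"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.28")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
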
	I1011 22:44:34.244907   84310 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:44:34.245132   84310 config.go:182] Loaded profile config "newest-cni-555648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:34.245241   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.247929   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.248260   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.248292   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.248482   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.248674   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.248847   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.248987   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.249145   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:34.249327   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:34.249347   84310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:44:34.477556   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:44:34.477588   84310 main.go:141] libmachine: Checking connection to Docker...
	I1011 22:44:34.477600   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetURL
	I1011 22:44:34.478982   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using libvirt version 6000000
	I1011 22:44:34.481138   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.481420   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.481460   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.481573   84310 main.go:141] libmachine: Docker is up and running!
	I1011 22:44:34.481586   84310 main.go:141] libmachine: Reticulating splines...
	I1011 22:44:34.481594   84310 client.go:171] duration metric: took 29.458877128s to LocalClient.Create
	I1011 22:44:34.481628   84310 start.go:167] duration metric: took 29.458950635s to libmachine.API.Create "newest-cni-555648"
	I1011 22:44:34.481637   84310 start.go:293] postStartSetup for "newest-cni-555648" (driver="kvm2")
	I1011 22:44:34.481650   84310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:44:34.481665   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.481876   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:44:34.481899   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.483765   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.484017   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.484040   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.484229   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.484395   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.484580   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.484724   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.569527   84310 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:44:34.573502   84310 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:44:34.573523   84310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:44:34.573591   84310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:44:34.573681   84310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:44:34.573792   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:44:34.583748   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:44:34.607678   84310 start.go:296] duration metric: took 126.024827ms for postStartSetup
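
The filesync scan during postStartSetup above copies anything found under .minikube/files into the same path on the guest, which is how 188142.pem ends up in /etc/ssl/certs. A hypothetical helper showing that path mapping, assuming the files root used in this run (illustrative, not minikube's filesync code):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // remoteTarget maps a local asset under filesRoot to the directory and
    // file name it should have on the guest, mirroring the scan above.
    func remoteTarget(filesRoot, localPath string) (dir, name string, err error) {
        rel, err := filepath.Rel(filesRoot, localPath)
        if err != nil {
            return "", "", err
        }
        return "/" + filepath.Dir(rel), filepath.Base(rel), nil
    }

    func main() {
        dir, name, _ := remoteTarget(
            "/home/jenkins/minikube-integration/19749-11611/.minikube/files",
            "/home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem",
        )
        fmt.Println(dir, name) // /etc/ssl/certs 188142.pem
    }
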
	I1011 22:44:34.607727   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:34.608341   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:34.610962   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.611299   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.611344   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.611575   84310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:44:34.611749   84310 start.go:128] duration metric: took 29.607243974s to createHost
	I1011 22:44:34.611771   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.613859   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.614140   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.614165   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.614288   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.614460   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.614603   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.614745   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.614871   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:34.615021   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:34.615031   84310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:44:34.723199   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728686674.705066502
	
	I1011 22:44:34.723223   84310 fix.go:216] guest clock: 1728686674.705066502
	I1011 22:44:34.723233   84310 fix.go:229] Guest: 2024-10-11 22:44:34.705066502 +0000 UTC Remote: 2024-10-11 22:44:34.611760965 +0000 UTC m=+29.722042852 (delta=93.305537ms)
	I1011 22:44:34.723256   84310 fix.go:200] guest clock delta is within tolerance: 93.305537ms
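
The guest clock check above runs `date +%s.%N` on the VM and compares it against a host-side reference timestamp; the ~93ms delta is within tolerance, so no clock adjustment is made. A small sketch of that comparison (the parsing is an illustration, not minikube's fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // it is ahead of (or behind) the host reference time.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Unix(1728686674, 611760965) // host-side timestamp from the log
        delta, err := clockDelta("1728686674.705066502", host)
        if err != nil {
            panic(err)
        }
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < time.Second && delta > -time.Second)
    }
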
	I1011 22:44:34.723262   84310 start.go:83] releasing machines lock for "newest-cni-555648", held for 29.718864916s
	I1011 22:44:34.723288   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.723537   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:34.726079   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.726429   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.726456   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.726645   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727101   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727305   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727404   84310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:44:34.727456   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.727497   84310 ssh_runner.go:195] Run: cat /version.json
	I1011 22:44:34.727532   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.730097   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730256   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730450   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.730495   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730667   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.730696   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.730743   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730813   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.730888   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.730968   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.731021   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.731090   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.731164   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.731289   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.833089   84310 ssh_runner.go:195] Run: systemctl --version
	I1011 22:44:34.840199   84310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:44:35.002779   84310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:44:35.008829   84310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:44:35.008896   84310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:44:35.025730   84310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:44:35.025754   84310 start.go:495] detecting cgroup driver to use...
	I1011 22:44:35.025807   84310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:44:35.043855   84310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:44:35.060827   84310 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:44:35.060893   84310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:44:35.077741   84310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:44:35.093355   84310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:44:35.220491   84310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:44:35.375832   84310 docker.go:233] disabling docker service ...
	I1011 22:44:35.375936   84310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:44:35.390879   84310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:44:35.403628   84310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:44:35.524431   84310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:44:35.652171   84310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:44:35.667985   84310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:44:35.688250   84310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:44:35.688301   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.699887   84310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:44:35.699968   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.710522   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.721077   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.731886   84310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:44:35.742574   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.753279   84310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.771374   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.782331   84310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:44:35.792028   84310 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:44:35.792091   84310 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:44:35.807455   84310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
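
The failed sysctl probe a few lines above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, which the subsequent modprobe takes care of before IP forwarding is enabled. A tiny sketch of reading such a parameter directly from /proc/sys (hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readSysctl reads a kernel parameter straight from /proc/sys; it fails
    // with "no such file or directory" until the module providing it is loaded.
    func readSysctl(name string) (string, error) {
        path := "/proc/sys/" + strings.ReplaceAll(name, ".", "/")
        b, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := readSysctl("net.bridge.bridge-nf-call-iptables")
        fmt.Println(v, err)
    }
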
	I1011 22:44:35.817383   84310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:44:35.940408   84310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:44:36.053490   84310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:44:36.053564   84310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:44:36.059204   84310 start.go:563] Will wait 60s for crictl version
	I1011 22:44:36.059287   84310 ssh_runner.go:195] Run: which crictl
	I1011 22:44:36.063532   84310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:44:36.110367   84310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:44:36.110444   84310 ssh_runner.go:195] Run: crio --version
	I1011 22:44:36.142268   84310 ssh_runner.go:195] Run: crio --version
	I1011 22:44:36.180074   84310 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:44:36.181627   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:36.184426   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:36.184717   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:36.184742   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:36.184965   84310 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:44:36.190248   84310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:44:36.208226   84310 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1011 22:44:36.209670   84310 kubeadm.go:883] updating cluster {Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:44:36.209767   84310 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:44:36.209828   84310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:44:36.254555   84310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:44:36.254608   84310 ssh_runner.go:195] Run: which lz4
	I1011 22:44:36.259802   84310 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:44:36.264389   84310 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:44:36.264415   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:44:37.669082   84310 crio.go:462] duration metric: took 1.409306498s to copy over tarball
	I1011 22:44:37.669167   84310 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:44:39.870555   84310 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.201361185s)
	I1011 22:44:39.870581   84310 crio.go:469] duration metric: took 2.201468062s to extract the tarball
	I1011 22:44:39.870588   84310 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:44:39.913516   84310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:44:39.958360   84310 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:44:39.958382   84310 cache_images.go:84] Images are preloaded, skipping loading
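
The two `crictl images --output json` runs above bracket the preload: the first finds no kube-apiserver image, so the roughly 388 MB tarball is copied over and extracted into /var; the second confirms the images are present. A sketch of that check, assuming the JSON carries an "images" array with "repoTags" (the shape of the CRI ListImages response):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // imagePreloaded reports whether tag appears in crictl's JSON output.
    func imagePreloaded(raw []byte, tag string) (bool, error) {
        var list crictlImageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
        ok, err := imagePreloaded(sample, "registry.k8s.io/kube-apiserver:v1.31.1")
        fmt.Println(ok, err) // true <nil>
    }
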
	I1011 22:44:39.958388   84310 kubeadm.go:934] updating node { 192.168.50.28 8443 v1.31.1 crio true true} ...
	I1011 22:44:39.958517   84310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-555648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:44:39.958625   84310 ssh_runner.go:195] Run: crio config
	I1011 22:44:40.012636   84310 cni.go:84] Creating CNI manager for ""
	I1011 22:44:40.012658   84310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:44:40.012669   84310 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1011 22:44:40.012698   84310 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-555648 NodeName:newest-cni-555648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:44:40.012857   84310 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-555648"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:44:40.012930   84310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:44:40.023146   84310 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:44:40.023217   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:44:40.033101   84310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1011 22:44:40.051668   84310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:44:40.069112   84310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
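
The kubeadm.yaml written above pins podSubnet to 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) and serviceSubnet to 10.96.0.0/12. A quick check that those two ranges do not overlap, which would otherwise cause routing conflicts between pod and service traffic (illustrative only):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        pods := netip.MustParsePrefix("10.42.0.0/16")     // podSubnet from kubeadm.yaml
        services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from kubeadm.yaml
        fmt.Println("overlap:", pods.Overlaps(services))  // overlap: false
    }
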
	I1011 22:44:40.086253   84310 ssh_runner.go:195] Run: grep 192.168.50.28	control-plane.minikube.internal$ /etc/hosts
	I1011 22:44:40.089999   84310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:44:40.102428   84310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:44:40.241607   84310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:44:40.259509   84310 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648 for IP: 192.168.50.28
	I1011 22:44:40.259536   84310 certs.go:194] generating shared ca certs ...
	I1011 22:44:40.259556   84310 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.259739   84310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:44:40.259800   84310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:44:40.259813   84310 certs.go:256] generating profile certs ...
	I1011 22:44:40.259883   84310 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.key
	I1011 22:44:40.259900   84310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.crt with IP's: []
	I1011 22:44:40.466669   84310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.crt ...
	I1011 22:44:40.466701   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.crt: {Name:mk95e138cc50429f377f5e7d0c8993087624225f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.466878   84310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.key ...
	I1011 22:44:40.466892   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.key: {Name:mk46fbd4d30e093ef49ea6ee7ecfd555d58c9a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.467004   84310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key.83610948
	I1011 22:44:40.467043   84310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt.83610948 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.28]
	I1011 22:44:40.604685   84310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt.83610948 ...
	I1011 22:44:40.604712   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt.83610948: {Name:mk85e1fe8f17adb2807d2baff60b4089bc99c52b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.604864   84310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key.83610948 ...
	I1011 22:44:40.604876   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key.83610948: {Name:mk6f95e3c708f98d1902ec9b772d4ec23f1e59e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.604946   84310 certs.go:381] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt.83610948 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt
	I1011 22:44:40.605036   84310 certs.go:385] copying /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key.83610948 -> /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key
	I1011 22:44:40.605093   84310 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key
	I1011 22:44:40.605111   84310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.crt with IP's: []
	I1011 22:44:40.755536   84310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.crt ...
	I1011 22:44:40.755567   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.crt: {Name:mk7f2d3a05050263310a5a42efdd0c5251d66018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.755715   84310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key ...
	I1011 22:44:40.755726   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key: {Name:mk7abedc1ade00dc1888639ec87a014c20073259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:40.755925   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:44:40.755978   84310 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:44:40.755992   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:44:40.756027   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:44:40.756061   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:44:40.756085   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:44:40.756135   84310 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:44:40.756836   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:44:40.786269   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:44:40.813387   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:44:40.839793   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:44:40.866621   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:44:40.891898   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:44:40.916057   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:44:40.939737   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:44:40.963772   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:44:40.991362   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:44:41.015278   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:44:41.042745   84310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:44:41.078599   84310 ssh_runner.go:195] Run: openssl version
	I1011 22:44:41.085986   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:44:41.097076   84310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:44:41.101516   84310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:44:41.101571   84310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:44:41.107542   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:44:41.119437   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:44:41.130070   84310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:44:41.134602   84310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:44:41.134678   84310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:44:41.140470   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:44:41.151680   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:44:41.164046   84310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:44:41.168565   84310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:44:41.168603   84310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:44:41.174265   84310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
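
Each certificate copied into /usr/share/ca-certificates above is hashed with `openssl x509 -hash -noout -in <pem>` and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it by subject hash. A small sketch of the link-if-missing step (the hash value itself comes from openssl, not from this code):

    package main

    import (
        "fmt"
        "os"
    )

    // ensureCertLink creates linkPath -> pemPath unless something already
    // exists at linkPath, mirroring the `test -L ... || ln -fs ...` commands above.
    func ensureCertLink(pemPath, linkPath string) error {
        if _, err := os.Lstat(linkPath); err == nil {
            return nil
        }
        return os.Symlink(pemPath, linkPath)
    }

    func main() {
        err := ensureCertLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs/b5213941.0")
        fmt.Println(err)
    }
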
	I1011 22:44:41.185571   84310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:44:41.190215   84310 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 22:44:41.190276   84310 kubeadm.go:392] StartCluster: {Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:44:41.190371   84310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:44:41.190432   84310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:44:41.226501   84310 cri.go:89] found id: ""
	I1011 22:44:41.226561   84310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:44:41.236383   84310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:44:41.246177   84310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:44:41.257300   84310 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:44:41.257318   84310 kubeadm.go:157] found existing configuration files:
	
	I1011 22:44:41.257354   84310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:44:41.266114   84310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:44:41.266167   84310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:44:41.275311   84310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:44:41.283921   84310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:44:41.283962   84310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:44:41.294260   84310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:44:41.303043   84310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:44:41.303116   84310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:44:41.313391   84310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:44:41.322732   84310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:44:41.322785   84310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:44:41.331972   84310 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:44:41.442012   84310 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:44:41.442231   84310 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:44:41.547338   84310 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:44:41.547538   84310 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:44:41.547676   84310 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:44:41.556819   84310 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:44:41.718058   84310 out.go:235]   - Generating certificates and keys ...
	I1011 22:44:41.718182   84310 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:44:41.718288   84310 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:44:41.718379   84310 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 22:44:41.718511   84310 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 22:44:41.937377   84310 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 22:44:42.006317   84310 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 22:44:42.171748   84310 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 22:44:42.171887   84310 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-555648] and IPs [192.168.50.28 127.0.0.1 ::1]
	I1011 22:44:42.290666   84310 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 22:44:42.290854   84310 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-555648] and IPs [192.168.50.28 127.0.0.1 ::1]
	I1011 22:44:42.422545   84310 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 22:44:42.654768   84310 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 22:44:42.698771   84310 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 22:44:42.699016   84310 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:44:42.856476   84310 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:44:42.971661   84310 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:44:43.256263   84310 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:44:43.388059   84310 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:44:43.630591   84310 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:44:43.631162   84310 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:44:43.633990   84310 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:44:43.635712   84310 out.go:235]   - Booting up control plane ...
	I1011 22:44:43.635823   84310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:44:43.635906   84310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:44:43.635987   84310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:44:43.651146   84310 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:44:43.658519   84310 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:44:43.658578   84310 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:44:43.778687   84310 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:44:43.778857   84310 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:44:44.779317   84310 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001145757s
	I1011 22:44:44.779408   84310 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:44:49.780156   84310 kubeadm.go:310] [api-check] The API server is healthy after 5.001306765s
	I1011 22:44:49.808065   84310 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:44:49.829787   84310 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:44:49.858793   84310 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:44:49.859038   84310 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-555648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:44:49.873117   84310 kubeadm.go:310] [bootstrap-token] Using token: osr22b.h464m3id6oysv96d
	I1011 22:44:49.874366   84310 out.go:235]   - Configuring RBAC rules ...
	I1011 22:44:49.874521   84310 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:44:49.887655   84310 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:44:49.901572   84310 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:44:49.904855   84310 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:44:49.908791   84310 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:44:49.913572   84310 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:44:50.188858   84310 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:44:50.637889   84310 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:44:51.186738   84310 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:44:51.186762   84310 kubeadm.go:310] 
	I1011 22:44:51.186833   84310 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:44:51.186842   84310 kubeadm.go:310] 
	I1011 22:44:51.186961   84310 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:44:51.186979   84310 kubeadm.go:310] 
	I1011 22:44:51.187000   84310 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:44:51.187102   84310 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:44:51.187188   84310 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:44:51.187202   84310 kubeadm.go:310] 
	I1011 22:44:51.187291   84310 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:44:51.187307   84310 kubeadm.go:310] 
	I1011 22:44:51.187371   84310 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:44:51.187381   84310 kubeadm.go:310] 
	I1011 22:44:51.187458   84310 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:44:51.187555   84310 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:44:51.187647   84310 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:44:51.187656   84310 kubeadm.go:310] 
	I1011 22:44:51.187768   84310 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:44:51.187880   84310 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:44:51.187889   84310 kubeadm.go:310] 
	I1011 22:44:51.188012   84310 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token osr22b.h464m3id6oysv96d \
	I1011 22:44:51.188171   84310 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:44:51.188209   84310 kubeadm.go:310] 	--control-plane 
	I1011 22:44:51.188218   84310 kubeadm.go:310] 
	I1011 22:44:51.188320   84310 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:44:51.188331   84310 kubeadm.go:310] 
	I1011 22:44:51.188431   84310 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token osr22b.h464m3id6oysv96d \
	I1011 22:44:51.188599   84310 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:44:51.189044   84310 kubeadm.go:310] W1011 22:44:41.424028     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:44:51.189325   84310 kubeadm.go:310] W1011 22:44:41.427437     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:44:51.189472   84310 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:44:51.189493   84310 cni.go:84] Creating CNI manager for ""
	I1011 22:44:51.189512   84310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:44:51.191362   84310 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:44:51.192580   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:44:51.202979   84310 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:44:51.225633   84310 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:44:51.225732   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-555648 minikube.k8s.io/updated_at=2024_10_11T22_44_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=newest-cni-555648 minikube.k8s.io/primary=true
	I1011 22:44:51.225876   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:51.470107   84310 ops.go:34] apiserver oom_adj: -16
	I1011 22:44:51.470265   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:51.971130   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:52.470334   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:52.970647   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:53.470447   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:53.971152   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:54.470745   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:54.971245   84310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:44:55.072808   84310 kubeadm.go:1113] duration metric: took 3.846972882s to wait for elevateKubeSystemPrivileges
	I1011 22:44:55.072851   84310 kubeadm.go:394] duration metric: took 13.882578616s to StartCluster
	I1011 22:44:55.072876   84310 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:55.072960   84310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:44:55.075133   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:55.075397   84310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 22:44:55.075396   84310 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:44:55.075443   84310 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:44:55.075649   84310 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-555648"
	I1011 22:44:55.075656   84310 addons.go:69] Setting default-storageclass=true in profile "newest-cni-555648"
	I1011 22:44:55.075662   84310 config.go:182] Loaded profile config "newest-cni-555648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:55.075667   84310 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-555648"
	I1011 22:44:55.075678   84310 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-555648"
	I1011 22:44:55.075699   84310 host.go:66] Checking if "newest-cni-555648" exists ...
	I1011 22:44:55.076061   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:55.076101   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:55.076167   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:55.076207   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:55.076887   84310 out.go:177] * Verifying Kubernetes components...
	I1011 22:44:55.078371   84310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:44:55.091819   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1011 22:44:55.091826   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I1011 22:44:55.092255   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:55.092307   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:55.092798   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:55.092815   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:55.092932   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:55.092954   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:55.093149   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:55.093304   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:55.093339   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:44:55.093864   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:55.093901   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:55.097525   84310 addons.go:234] Setting addon default-storageclass=true in "newest-cni-555648"
	I1011 22:44:55.097566   84310 host.go:66] Checking if "newest-cni-555648" exists ...
	I1011 22:44:55.097928   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:55.097969   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:55.109676   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1011 22:44:55.110269   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:55.110835   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:55.110863   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:55.111208   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:55.111393   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:44:55.113159   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:55.113242   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I1011 22:44:55.113556   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:55.113985   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:55.114005   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:55.114311   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:55.114836   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:55.114876   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:55.115150   84310 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:44:55.116380   84310 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:44:55.116396   84310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:44:55.116419   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:55.119783   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:55.120264   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:55.120289   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:55.120483   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:55.120629   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:55.120775   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:55.120864   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:55.131729   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I1011 22:44:55.132268   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:55.133070   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:55.133085   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:55.133406   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:55.133597   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:44:55.135055   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:55.135225   84310 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:44:55.135247   84310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:44:55.135261   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:55.140492   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:55.140842   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:55.140885   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:55.140989   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:55.141125   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:55.141259   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:55.141366   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:55.289551   84310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 22:44:55.317428   84310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:44:55.501573   84310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:44:55.509924   84310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:44:55.843670   84310 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1011 22:44:55.844928   84310 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:44:55.844990   84310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:44:56.349356   84310 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-555648" context rescaled to 1 replicas
	I1011 22:44:56.644375   84310 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.142765332s)
	I1011 22:44:56.644418   84310 main.go:141] libmachine: Making call to close driver server
	I1011 22:44:56.644439   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Close
	I1011 22:44:56.644478   84310 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.134508249s)
	I1011 22:44:56.644528   84310 api_server.go:72] duration metric: took 1.569050427s to wait for apiserver process to appear ...
	I1011 22:44:56.644543   84310 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:44:56.644564   84310 api_server.go:253] Checking apiserver healthz at https://192.168.50.28:8443/healthz ...
	I1011 22:44:56.644529   84310 main.go:141] libmachine: Making call to close driver server
	I1011 22:44:56.644616   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Close
	I1011 22:44:56.644743   84310 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:44:56.644758   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Closing plugin on server side
	I1011 22:44:56.644769   84310 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:44:56.644778   84310 main.go:141] libmachine: Making call to close driver server
	I1011 22:44:56.644784   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Close
	I1011 22:44:56.644967   84310 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:44:56.644982   84310 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:44:56.644991   84310 main.go:141] libmachine: Making call to close driver server
	I1011 22:44:56.644999   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Close
	I1011 22:44:56.644998   84310 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:44:56.645011   84310 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:44:56.644969   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Closing plugin on server side
	I1011 22:44:56.645276   84310 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:44:56.645296   84310 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:44:56.645299   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Closing plugin on server side
	I1011 22:44:56.680101   84310 api_server.go:279] https://192.168.50.28:8443/healthz returned 200:
	ok
	I1011 22:44:56.694054   84310 main.go:141] libmachine: Making call to close driver server
	I1011 22:44:56.694074   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Close
	I1011 22:44:56.694457   84310 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:44:56.694481   84310 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:44:56.696228   84310 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1011 22:44:56.697348   84310 addons.go:510] duration metric: took 1.621930059s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1011 22:44:56.700775   84310 api_server.go:141] control plane version: v1.31.1
	I1011 22:44:56.700802   84310 api_server.go:131] duration metric: took 56.251235ms to wait for apiserver health ...
	I1011 22:44:56.700812   84310 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:44:56.733577   84310 system_pods.go:59] 8 kube-system pods found
	I1011 22:44:56.733634   84310 system_pods.go:61] "coredns-7c65d6cfc9-75jmc" [36ccc0dd-1a9a-48fb-865a-8384ea230646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:44:56.733650   84310 system_pods.go:61] "coredns-7c65d6cfc9-pkgwk" [4aa8b763-9ea4-41a5-b6f5-0a19a2136c30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:44:56.733670   84310 system_pods.go:61] "etcd-newest-cni-555648" [a8b9a8e2-f6ee-4398-b226-6ed207e69ff3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:44:56.733683   84310 system_pods.go:61] "kube-apiserver-newest-cni-555648" [164505c8-c99f-45ec-a6c4-070cc229e725] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:44:56.733695   84310 system_pods.go:61] "kube-controller-manager-newest-cni-555648" [a7b1137c-1238-424b-ac33-605197f2cc1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:44:56.733705   84310 system_pods.go:61] "kube-proxy-zcm66" [72f11405-2f4d-4349-b98e-4920a7489480] Running
	I1011 22:44:56.733718   84310 system_pods.go:61] "kube-scheduler-newest-cni-555648" [7e810087-9e86-4b1c-a714-a5a6cacbe9a6] Running
	I1011 22:44:56.733726   84310 system_pods.go:61] "storage-provisioner" [eaaac3ee-875c-4b99-8b54-9a2379b192b4] Pending
	I1011 22:44:56.733734   84310 system_pods.go:74] duration metric: took 32.9157ms to wait for pod list to return data ...
	I1011 22:44:56.733748   84310 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:44:56.748562   84310 default_sa.go:45] found service account: "default"
	I1011 22:44:56.748590   84310 default_sa.go:55] duration metric: took 14.832425ms for default service account to be created ...
	I1011 22:44:56.748604   84310 kubeadm.go:582] duration metric: took 1.673125801s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 22:44:56.748622   84310 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:44:56.776427   84310 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:44:56.776467   84310 node_conditions.go:123] node cpu capacity is 2
	I1011 22:44:56.776481   84310 node_conditions.go:105] duration metric: took 27.853531ms to run NodePressure ...
	I1011 22:44:56.776496   84310 start.go:241] waiting for startup goroutines ...
	I1011 22:44:56.776505   84310 start.go:246] waiting for cluster config update ...
	I1011 22:44:56.776518   84310 start.go:255] writing updated cluster config ...
	I1011 22:44:56.776823   84310 ssh_runner.go:195] Run: rm -f paused
	I1011 22:44:56.842381   84310 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:44:56.844253   84310 out.go:177] * Done! kubectl is now configured to use "newest-cni-555648" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.319561257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686699319535840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1a39ea6-dd9f-48a2-9354-eb31ca755de4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.320046293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=844ac6ad-9c7b-4920-a7fc-dc51e368295d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.320095779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=844ac6ad-9c7b-4920-a7fc-dc51e368295d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.320286152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=844ac6ad-9c7b-4920-a7fc-dc51e368295d name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.359303109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff0cbdf6-df5b-4a66-9abc-ab24ccb91e99 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.359373615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff0cbdf6-df5b-4a66-9abc-ab24ccb91e99 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.360927386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=652b9736-12bd-4c4b-972d-bb5b677b1803 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.361320319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686699361296771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=652b9736-12bd-4c4b-972d-bb5b677b1803 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.361905518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04652331-85ed-4b94-a982-c100e818c776 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.361958043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04652331-85ed-4b94-a982-c100e818c776 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.362178357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04652331-85ed-4b94-a982-c100e818c776 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.405263821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3707b3e1-e3e7-4e27-8e82-b91e939a582e name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.405500176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3707b3e1-e3e7-4e27-8e82-b91e939a582e name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.406634526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80e09295-1406-46fe-9f82-99228ce74f06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.407649029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686699407622289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80e09295-1406-46fe-9f82-99228ce74f06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.408178912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fdbca5a-3731-47ad-8937-5974e2821f64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.408252463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fdbca5a-3731-47ad-8937-5974e2821f64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.409253717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fdbca5a-3731-47ad-8937-5974e2821f64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.446345190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da1a2b6c-44f4-47ac-8898-19b98cde2450 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.446439497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da1a2b6c-44f4-47ac-8898-19b98cde2450 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.447506213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=101ca2a3-722a-4760-8b2e-46b6dacf7fb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.448058674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686699448033260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=101ca2a3-722a-4760-8b2e-46b6dacf7fb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.448551475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7435a061-277d-4a67-8527-67a2084c6e00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.448620806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7435a061-277d-4a67-8527-67a2084c6e00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:59 embed-certs-223942 crio[709]: time="2024-10-11 22:44:59.448871418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84,PodSandboxId:ef9d32181014b23be494a933a79e560681d76ca34ca87fce3d1e8971c59f4c68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753798535749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcct7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addf150f-9f60-4184-9a87-8034b9d3fd8f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94,PodSandboxId:5592caa5415ef528f653a974d1b5995aff970dc992bb7a7caac1804318c28bd5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685753762882348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bchd4,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 9888edee-2d83-4ac7-9dcf-14a0d4c1adfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761,PodSandboxId:908491122f8dee10ed9bf12a92a0a526af8c11962064808cf44996ea85a1e5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728685752978792640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60223d53-4645-45d1-8546-9050636a6205,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822,PodSandboxId:7c7053b8740697728cd6895c6884ff431692d21d408b671904746f45064df74b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728685752139079430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qv4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76dc11bd-3597-4268-839e-9bace3c3e897,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea,PodSandboxId:ec489aa35ca0afceede57c1929b919da6861dd342e04b51ba543c8df2ea536fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685741132764703,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6722c3d1b62cd47917b39c0f51f93ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056,PodSandboxId:153e960b9abe6bc7a6bc6c0d2c51ea49485bf45722bbc697c1663e2f17a40f0b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685741081556230,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582422b82d7f0906687c8ae26614499a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249,PodSandboxId:e7dfebeb54ef4e3c24a2c2872bad189cabb6cf43860b095934ad423d2304f622,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685741026690440,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e82ae9ee47430ff60ce582edee6d06eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8,PodSandboxId:93356c94c9d188156ba03a36f60169338ef43e7f27e4d98c46e16b5e448fc4c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685740978812066,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc,PodSandboxId:5670c3d78eb5b93a55415e6346bd5e728e6f5d0fc31ad69c50d8cbbf8e7e6cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685453524303399,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8041d588bd36831154f63ec92c713,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7435a061-277d-4a67-8527-67a2084c6e00 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	73cb9493c5b87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   ef9d32181014b       coredns-7c65d6cfc9-qcct7
	1aac20bc993c9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   5592caa5415ef       coredns-7c65d6cfc9-bchd4
	6e3de90b28419       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   908491122f8de       storage-provisioner
	415392e78e166       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   7c7053b874069       kube-proxy-8qv4k
	b8f7d4a5b42d2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   ec489aa35ca0a       kube-scheduler-embed-certs-223942
	572eaaff73b31       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   153e960b9abe6       etcd-embed-certs-223942
	6998c3a00bca0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   e7dfebeb54ef4       kube-controller-manager-embed-certs-223942
	d211bb9e2c693       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   93356c94c9d18       kube-apiserver-embed-certs-223942
	885cb2372a071       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   5670c3d78eb5b       kube-apiserver-embed-certs-223942
	
	
	==> coredns [1aac20bc993c95770ff32c873dcf2aa13efd8822dac80786171adba32f13ed94] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [73cb9493c5b874442dfdb8dd9b01ba1d879c2f6dc0dbba10cb7a1c8eb1e5eb84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-223942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-223942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=embed-certs-223942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:29:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-223942
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:44:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:44:34 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:44:34 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:44:34 +0000   Fri, 11 Oct 2024 22:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:44:34 +0000   Fri, 11 Oct 2024 22:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.238
	  Hostname:    embed-certs-223942
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 69c0ac3f57ef4f8e90a55b80a28acbfc
	  System UUID:                69c0ac3f-57ef-4f8e-90a5-5b80a28acbfc
	  Boot ID:                    e156c070-0f57-421d-b90c-d63d5affe806
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bchd4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-qcct7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-223942                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-223942             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-223942    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8qv4k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-223942             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-5s6hn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-223942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-223942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-223942 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-223942 event: Registered Node embed-certs-223942 in Controller
	
	
	==> dmesg <==
	[  +0.051075] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040291] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.856390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.412189] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.615501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct11 22:24] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.062937] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065859] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.176872] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.164928] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.309075] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.189975] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +1.846284] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.061722] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500288] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.384181] kauditd_printk_skb: 85 callbacks suppressed
	[Oct11 22:28] systemd-fstab-generator[2569]: Ignoring "noauto" option for root device
	[  +0.059758] kauditd_printk_skb: 9 callbacks suppressed
	[Oct11 22:29] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +0.096253] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.812748] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[  +0.773515] kauditd_printk_skb: 34 callbacks suppressed
	[  +9.654875] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [572eaaff73b31e7475ea881db13dce47d44ee661b50c5c3bcaa2ae4237b75056] <==
	{"level":"info","ts":"2024-10-11T22:29:01.490990Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:01.491558Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:01.494295Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:01.495020Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:29:01.495192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.238:2379"}
	{"level":"info","ts":"2024-10-11T22:29:01.495274Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fce591e0af426ce5","local-member-id":"e2f0763a23b2a427","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:01.507564Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:01.507621Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:39:02.079280Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-10-11T22:39:02.087804Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"8.11394ms","hash":91417643,"current-db-size-bytes":2379776,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2379776,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-11T22:39:02.087858Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":91417643,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-10-11T22:44:02.086641Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-10-11T22:44:02.090418Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"3.375279ms","hash":3195437873,"current-db-size-bytes":2379776,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-11T22:44:02.090472Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3195437873,"revision":932,"compact-revision":689}
	{"level":"warn","ts":"2024-10-11T22:44:41.258503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.177452ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11828584014884986793 > lease_revoke:<id:2427927db3155f4c>","response":"size:29"}
	{"level":"info","ts":"2024-10-11T22:44:41.258864Z","caller":"traceutil/trace.go:171","msg":"trace[1560725255] linearizableReadLoop","detail":"{readStateIndex:1412; appliedIndex:1411; }","duration":"166.91844ms","start":"2024-10-11T22:44:41.091919Z","end":"2024-10-11T22:44:41.258838Z","steps":["trace[1560725255] 'read index received'  (duration: 40.113614ms)","trace[1560725255] 'applied index is now lower than readState.Index'  (duration: 126.803799ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T22:44:41.259250Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.290654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T22:44:41.260053Z","caller":"traceutil/trace.go:171","msg":"trace[1340156503] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1208; }","duration":"168.12962ms","start":"2024-10-11T22:44:41.091914Z","end":"2024-10-11T22:44:41.260044Z","steps":["trace[1340156503] 'agreement among raft nodes before linearized reading'  (duration: 167.271998ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T22:44:42.061286Z","caller":"traceutil/trace.go:171","msg":"trace[887573088] transaction","detail":"{read_only:false; response_revision:1209; number_of_response:1; }","duration":"223.617936ms","start":"2024-10-11T22:44:41.837446Z","end":"2024-10-11T22:44:42.061064Z","steps":["trace[887573088] 'process raft request'  (duration: 223.489715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T22:44:42.241259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.123229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T22:44:42.241664Z","caller":"traceutil/trace.go:171","msg":"trace[2011183491] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1209; }","duration":"149.533204ms","start":"2024-10-11T22:44:42.092115Z","end":"2024-10-11T22:44:42.241648Z","steps":["trace[2011183491] 'range keys from in-memory index tree'  (duration: 149.070592ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T22:44:42.367101Z","caller":"traceutil/trace.go:171","msg":"trace[1125637269] linearizableReadLoop","detail":"{readStateIndex:1414; appliedIndex:1413; }","duration":"101.107077ms","start":"2024-10-11T22:44:42.265976Z","end":"2024-10-11T22:44:42.367083Z","steps":["trace[1125637269] 'read index received'  (duration: 100.876796ms)","trace[1125637269] 'applied index is now lower than readState.Index'  (duration: 229.391µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-11T22:44:42.367367Z","caller":"traceutil/trace.go:171","msg":"trace[1785789637] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"117.807575ms","start":"2024-10-11T22:44:42.249546Z","end":"2024-10-11T22:44:42.367354Z","steps":["trace[1785789637] 'process raft request'  (duration: 117.365007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T22:44:42.368864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.429077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-11T22:44:42.368935Z","caller":"traceutil/trace.go:171","msg":"trace[1428421257] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1210; }","duration":"102.978008ms","start":"2024-10-11T22:44:42.265946Z","end":"2024-10-11T22:44:42.368924Z","steps":["trace[1428421257] 'agreement among raft nodes before linearized reading'  (duration: 101.373093ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:44:59 up 21 min,  0 users,  load average: 0.32, 0.16, 0.10
	Linux embed-certs-223942 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [885cb2372a0716a3da16d448bd39b4ed471accd744571a077d9f2e9f67a585dc] <==
	W1011 22:28:53.601529       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.632043       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.636636       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.647270       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.679589       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.787872       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.799449       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.806947       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.821617       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.850538       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.934477       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.954034       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.980095       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:53.992506       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.012112       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.033530       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.060198       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.109887       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.180535       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.193196       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.401835       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.432394       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:54.640242       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:57.368433       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:28:58.196914       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d211bb9e2c693529cd6f5c9f22fe1d2d241eb1519e77b72b1461e430c5ba92d8] <==
	I1011 22:40:04.945857       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:40:04.945874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:42:04.946042       1 handler_proxy.go:99] no RequestInfo found in the context
	W1011 22:42:04.946186       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:42:04.946262       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1011 22:42:04.946312       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1011 22:42:04.948156       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:42:04.948172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:44:03.945831       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:03.946145       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1011 22:44:04.947552       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:04.947620       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:44:04.947557       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:04.947702       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:44:04.948961       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:44:04.949016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6998c3a00bca0c634e349afdb4d1d08a1b5d0699f4b2314ac184f3896f44f249] <==
	E1011 22:39:40.980137       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:39:41.464142       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:39:59.686450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="138.164µs"
	E1011 22:40:10.987794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:40:11.472549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:40:13.685055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="76.514µs"
	E1011 22:40:40.995120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:40:41.482550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:41:11.001772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:11.491140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:41:41.008550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:41.499973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:11.015592       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:11.508876       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:41.021799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:41.519256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:11.028481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:11.526991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:41.035206       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:41.535205       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:44:11.041623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:44:11.543772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:44:34.480510       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-223942"
	E1011 22:44:41.047567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:44:41.552779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [415392e78e1665d55393d01cda41aeadb451586a4a1cee467dd4132dfa1c1822] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:29:12.575187       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:29:12.591672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.238"]
	E1011 22:29:12.591937       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:29:12.663029       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:29:12.663114       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:29:12.663195       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:29:12.667410       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:29:12.667793       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:29:12.667807       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:29:12.671088       1 config.go:199] "Starting service config controller"
	I1011 22:29:12.671146       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:29:12.671181       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:29:12.671197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:29:12.672070       1 config.go:328] "Starting node config controller"
	I1011 22:29:12.672141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:29:12.772166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 22:29:12.772190       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:29:12.772208       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b8f7d4a5b42d20d5437693b198a80a6cf06ae65a335e1825ef642dc1a39295ea] <==
	W1011 22:29:04.771428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 22:29:04.771483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.807438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.807491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.810891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:29:04.810942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.822192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:04.822326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.867028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.867237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.911147       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:29:04.911304       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:29:04.924931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 22:29:04.925124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.953828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:04.953878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:04.990061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 22:29:04.990124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.105943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:29:05.106091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.157158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 22:29:05.157522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:05.280174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:29:05.280789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1011 22:29:07.934588       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:43:57 embed-certs-223942 kubelet[2897]: E1011 22:43:57.671280    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]: E1011 22:44:06.690779    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]: E1011 22:44:06.871342    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686646870898626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:06 embed-certs-223942 kubelet[2897]: E1011 22:44:06.871394    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686646870898626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:12 embed-certs-223942 kubelet[2897]: E1011 22:44:12.674994    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:44:16 embed-certs-223942 kubelet[2897]: E1011 22:44:16.872953    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686656872480375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:16 embed-certs-223942 kubelet[2897]: E1011 22:44:16.873299    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686656872480375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:23 embed-certs-223942 kubelet[2897]: E1011 22:44:23.672507    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:44:26 embed-certs-223942 kubelet[2897]: E1011 22:44:26.875429    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686666874994815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:26 embed-certs-223942 kubelet[2897]: E1011 22:44:26.875501    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686666874994815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:36 embed-certs-223942 kubelet[2897]: E1011 22:44:36.672270    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:44:36 embed-certs-223942 kubelet[2897]: E1011 22:44:36.877574    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676877215132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:36 embed-certs-223942 kubelet[2897]: E1011 22:44:36.877609    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676877215132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:46 embed-certs-223942 kubelet[2897]: E1011 22:44:46.880303    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686686879904764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:46 embed-certs-223942 kubelet[2897]: E1011 22:44:46.880595    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686686879904764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:49 embed-certs-223942 kubelet[2897]: E1011 22:44:49.697837    2897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 11 22:44:49 embed-certs-223942 kubelet[2897]: E1011 22:44:49.697979    2897 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 11 22:44:49 embed-certs-223942 kubelet[2897]: E1011 22:44:49.698225    2897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-264b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-5s6hn_kube-system(526f3ae3-7af0-4542-87d4-66b0281b4058): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 11 22:44:49 embed-certs-223942 kubelet[2897]: E1011 22:44:49.699749    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-5s6hn" podUID="526f3ae3-7af0-4542-87d4-66b0281b4058"
	Oct 11 22:44:56 embed-certs-223942 kubelet[2897]: E1011 22:44:56.882838    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686696881982855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:56 embed-certs-223942 kubelet[2897]: E1011 22:44:56.883332    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686696881982855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6e3de90b28419d0f442411e1e5060a0738a002f0e59b220a3236d5d296179761] <==
	I1011 22:29:13.061397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:29:13.071174       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:29:13.071374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:29:13.080887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:29:13.081101       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21!
	I1011 22:29:13.083659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6324dea6-af7b-49be-b2ed-a9f9889bb6a5", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21 became leader
	I1011 22:29:13.181855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-223942_2f3c7e46-1b65-4011-a8b1-d04225923a21!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223942 -n embed-certs-223942
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-223942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5s6hn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn: exit status 1 (62.190219ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5s6hn" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-223942 describe pod metrics-server-6867b74b74-5s6hn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (398.28s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (416.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:45:39.629830238 +0000 UTC m=+6459.644187746
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-070708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.658µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-070708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-070708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-070708 logs -n 25: (1.274376003s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	| start   | -p newest-cni-555648 --memory=2200 --alsologtostderr   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	| addons  | enable metrics-server -p newest-cni-555648             | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-555648                                   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:45 UTC | 11 Oct 24 22:45 UTC |
	| addons  | enable dashboard -p newest-cni-555648                  | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:45 UTC | 11 Oct 24 22:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-555648 --memory=2200 --alsologtostderr   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:45:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:45:08.830156   85254 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:45:08.830261   85254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:45:08.830271   85254 out.go:358] Setting ErrFile to fd 2...
	I1011 22:45:08.830278   85254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:45:08.830459   85254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:45:08.831049   85254 out.go:352] Setting JSON to false
	I1011 22:45:08.832177   85254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8854,"bootTime":1728677855,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:45:08.832316   85254 start.go:139] virtualization: kvm guest
	I1011 22:45:08.834673   85254 out.go:177] * [newest-cni-555648] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:45:08.836054   85254 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:45:08.836047   85254 notify.go:220] Checking for updates...
	I1011 22:45:08.838396   85254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:45:08.839544   85254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:45:08.840631   85254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:45:08.841885   85254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:45:08.843320   85254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:45:08.844737   85254 config.go:182] Loaded profile config "newest-cni-555648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:45:08.845097   85254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:45:08.845145   85254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:45:08.859833   85254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1011 22:45:08.860301   85254 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:45:08.860864   85254 main.go:141] libmachine: Using API Version  1
	I1011 22:45:08.860884   85254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:45:08.861223   85254 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:45:08.861380   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:08.861580   85254 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:45:08.861853   85254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:45:08.861882   85254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:45:08.876785   85254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I1011 22:45:08.877222   85254 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:45:08.877721   85254 main.go:141] libmachine: Using API Version  1
	I1011 22:45:08.877744   85254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:45:08.878095   85254 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:45:08.878290   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:08.914973   85254 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:45:08.916063   85254 start.go:297] selected driver: kvm2
	I1011 22:45:08.916075   85254 start.go:901] validating driver "kvm2" against &{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:45:08.916193   85254 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:45:08.916899   85254 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:45:08.916986   85254 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:45:08.933303   85254 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:45:08.933734   85254 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 22:45:08.933767   85254 cni.go:84] Creating CNI manager for ""
	I1011 22:45:08.933811   85254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:45:08.933850   85254 start.go:340] cluster config:
	{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:45:08.933954   85254 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:45:08.935682   85254 out.go:177] * Starting "newest-cni-555648" primary control-plane node in "newest-cni-555648" cluster
	I1011 22:45:08.937231   85254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:45:08.937266   85254 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 22:45:08.937273   85254 cache.go:56] Caching tarball of preloaded images
	I1011 22:45:08.937347   85254 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:45:08.937357   85254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 22:45:08.937450   85254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:45:08.937631   85254 start.go:360] acquireMachinesLock for newest-cni-555648: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:45:08.937688   85254 start.go:364] duration metric: took 39.62µs to acquireMachinesLock for "newest-cni-555648"
	I1011 22:45:08.937700   85254 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:45:08.937706   85254 fix.go:54] fixHost starting: 
	I1011 22:45:08.937963   85254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:45:08.937991   85254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:45:08.953545   85254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I1011 22:45:08.954004   85254 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:45:08.954520   85254 main.go:141] libmachine: Using API Version  1
	I1011 22:45:08.954538   85254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:45:08.954853   85254 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:45:08.954994   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:08.955143   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:45:08.956695   85254 fix.go:112] recreateIfNeeded on newest-cni-555648: state=Stopped err=<nil>
	I1011 22:45:08.956717   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	W1011 22:45:08.956853   85254 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:45:08.959573   85254 out.go:177] * Restarting existing kvm2 VM for "newest-cni-555648" ...
	I1011 22:45:08.960782   85254 main.go:141] libmachine: (newest-cni-555648) Calling .Start
	I1011 22:45:08.960995   85254 main.go:141] libmachine: (newest-cni-555648) Ensuring networks are active...
	I1011 22:45:08.961794   85254 main.go:141] libmachine: (newest-cni-555648) Ensuring network default is active
	I1011 22:45:08.962094   85254 main.go:141] libmachine: (newest-cni-555648) Ensuring network mk-newest-cni-555648 is active
	I1011 22:45:08.962432   85254 main.go:141] libmachine: (newest-cni-555648) Getting domain xml...
	I1011 22:45:08.963061   85254 main.go:141] libmachine: (newest-cni-555648) Creating domain...
	I1011 22:45:10.187986   85254 main.go:141] libmachine: (newest-cni-555648) Waiting to get IP...
	I1011 22:45:10.188803   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:10.189213   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:10.189288   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:10.189202   85289 retry.go:31] will retry after 306.533149ms: waiting for machine to come up
	I1011 22:45:10.497759   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:10.498246   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:10.498276   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:10.498213   85289 retry.go:31] will retry after 248.316821ms: waiting for machine to come up
	I1011 22:45:10.747739   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:10.748255   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:10.748278   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:10.748211   85289 retry.go:31] will retry after 365.445992ms: waiting for machine to come up
	I1011 22:45:11.115686   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:11.116227   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:11.116266   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:11.116178   85289 retry.go:31] will retry after 524.932849ms: waiting for machine to come up
	I1011 22:45:11.642833   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:11.643185   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:11.643213   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:11.643146   85289 retry.go:31] will retry after 471.544691ms: waiting for machine to come up
	I1011 22:45:12.116752   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:12.117194   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:12.117212   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:12.117159   85289 retry.go:31] will retry after 579.109885ms: waiting for machine to come up
	I1011 22:45:12.697963   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:12.698367   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:12.698389   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:12.698327   85289 retry.go:31] will retry after 1.005150845s: waiting for machine to come up
	I1011 22:45:13.705362   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:13.705816   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:13.705845   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:13.705775   85289 retry.go:31] will retry after 1.397149384s: waiting for machine to come up
	I1011 22:45:15.104909   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:15.105339   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:15.105372   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:15.105315   85289 retry.go:31] will retry after 1.430294484s: waiting for machine to come up
	I1011 22:45:16.537847   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:16.538380   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:16.538412   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:16.538334   85289 retry.go:31] will retry after 2.04294482s: waiting for machine to come up
	I1011 22:45:18.583527   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:18.583972   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:18.584001   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:18.583952   85289 retry.go:31] will retry after 2.696642934s: waiting for machine to come up
	I1011 22:45:21.282373   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:21.282872   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:21.282900   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:21.282825   85289 retry.go:31] will retry after 2.678845493s: waiting for machine to come up
	I1011 22:45:23.963246   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:23.963695   85254 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:45:23.963724   85254 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:45:23.963650   85289 retry.go:31] will retry after 3.377913164s: waiting for machine to come up
	I1011 22:45:27.342644   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.343074   85254 main.go:141] libmachine: (newest-cni-555648) Found IP for machine: 192.168.50.28
	I1011 22:45:27.343101   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has current primary IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.343110   85254 main.go:141] libmachine: (newest-cni-555648) Reserving static IP address...
	I1011 22:45:27.343614   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "newest-cni-555648", mac: "52:54:00:be:f3:e1", ip: "192.168.50.28"} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.343633   85254 main.go:141] libmachine: (newest-cni-555648) DBG | skip adding static IP to network mk-newest-cni-555648 - found existing host DHCP lease matching {name: "newest-cni-555648", mac: "52:54:00:be:f3:e1", ip: "192.168.50.28"}
	I1011 22:45:27.343644   85254 main.go:141] libmachine: (newest-cni-555648) Reserved static IP address: 192.168.50.28
	I1011 22:45:27.343658   85254 main.go:141] libmachine: (newest-cni-555648) Waiting for SSH to be available...
	I1011 22:45:27.343671   85254 main.go:141] libmachine: (newest-cni-555648) DBG | Getting to WaitForSSH function...
	I1011 22:45:27.345939   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.346361   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.346392   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.346435   85254 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH client type: external
	I1011 22:45:27.346458   85254 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa (-rw-------)
	I1011 22:45:27.346482   85254 main.go:141] libmachine: (newest-cni-555648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:45:27.346503   85254 main.go:141] libmachine: (newest-cni-555648) DBG | About to run SSH command:
	I1011 22:45:27.346519   85254 main.go:141] libmachine: (newest-cni-555648) DBG | exit 0
	I1011 22:45:27.470558   85254 main.go:141] libmachine: (newest-cni-555648) DBG | SSH cmd err, output: <nil>: 
	I1011 22:45:27.470939   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:45:27.471650   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:45:27.473903   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.474272   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.474306   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.474559   85254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:45:27.474793   85254 machine.go:93] provisionDockerMachine start ...
	I1011 22:45:27.474812   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:27.474984   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:27.477156   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.477460   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.477484   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.477629   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:27.477796   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.477929   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.478012   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:27.478113   85254 main.go:141] libmachine: Using SSH client type: native
	I1011 22:45:27.478282   85254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:45:27.478293   85254 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:45:27.582833   85254 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:45:27.582859   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:45:27.583066   85254 buildroot.go:166] provisioning hostname "newest-cni-555648"
	I1011 22:45:27.583093   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:45:27.583282   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:27.585769   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.586080   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.586130   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.586279   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:27.586446   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.586561   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.586701   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:27.586869   85254 main.go:141] libmachine: Using SSH client type: native
	I1011 22:45:27.587076   85254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:45:27.587088   85254 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555648 && echo "newest-cni-555648" | sudo tee /etc/hostname
	I1011 22:45:27.704656   85254 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555648
	
	I1011 22:45:27.704687   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:27.707203   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.707505   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.707528   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.707712   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:27.707869   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.708077   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:27.708214   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:27.708384   85254 main.go:141] libmachine: Using SSH client type: native
	I1011 22:45:27.708614   85254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:45:27.708644   85254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555648/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:45:27.821181   85254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:45:27.821210   85254 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:45:27.821233   85254 buildroot.go:174] setting up certificates
	I1011 22:45:27.821247   85254 provision.go:84] configureAuth start
	I1011 22:45:27.821269   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:45:27.821521   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:45:27.824139   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.824512   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.824550   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.824640   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:27.826888   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.827203   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:27.827230   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:27.827358   85254 provision.go:143] copyHostCerts
	I1011 22:45:27.827415   85254 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:45:27.827431   85254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:45:27.827501   85254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:45:27.827603   85254 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:45:27.827610   85254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:45:27.827634   85254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:45:27.827699   85254 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:45:27.827705   85254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:45:27.827726   85254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:45:27.827781   85254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555648 san=[127.0.0.1 192.168.50.28 localhost minikube newest-cni-555648]
	I1011 22:45:28.068532   85254 provision.go:177] copyRemoteCerts
	I1011 22:45:28.068585   85254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:45:28.068619   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.071313   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.071623   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.071658   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.071796   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.072000   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.072180   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.072326   85254 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:45:28.152490   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:45:28.176651   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:45:28.200114   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:45:28.223291   85254 provision.go:87] duration metric: took 402.023947ms to configureAuth
	I1011 22:45:28.223320   85254 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:45:28.223544   85254 config.go:182] Loaded profile config "newest-cni-555648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:45:28.223627   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.226228   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.226547   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.226575   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.226730   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.226902   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.227037   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.227132   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.227245   85254 main.go:141] libmachine: Using SSH client type: native
	I1011 22:45:28.227417   85254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:45:28.227433   85254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:45:28.445831   85254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:45:28.445857   85254 machine.go:96] duration metric: took 971.051804ms to provisionDockerMachine
	I1011 22:45:28.445869   85254 start.go:293] postStartSetup for "newest-cni-555648" (driver="kvm2")
	I1011 22:45:28.445879   85254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:45:28.445898   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:28.446178   85254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:45:28.446220   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.448770   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.449076   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.449106   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.449202   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.449365   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.449506   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.449664   85254 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:45:28.529428   85254 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:45:28.533432   85254 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:45:28.533455   85254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:45:28.533544   85254 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:45:28.533650   85254 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:45:28.533769   85254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:45:28.543504   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:45:28.568422   85254 start.go:296] duration metric: took 122.539776ms for postStartSetup
	I1011 22:45:28.568461   85254 fix.go:56] duration metric: took 19.630755157s for fixHost
	I1011 22:45:28.568490   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.570984   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.571314   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.571346   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.571490   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.571673   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.571820   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.571940   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.572079   85254 main.go:141] libmachine: Using SSH client type: native
	I1011 22:45:28.572261   85254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:45:28.572279   85254 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:45:28.675268   85254 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728686728.633643455
	
	I1011 22:45:28.675293   85254 fix.go:216] guest clock: 1728686728.633643455
	I1011 22:45:28.675304   85254 fix.go:229] Guest: 2024-10-11 22:45:28.633643455 +0000 UTC Remote: 2024-10-11 22:45:28.568464765 +0000 UTC m=+19.776773560 (delta=65.17869ms)
	I1011 22:45:28.675337   85254 fix.go:200] guest clock delta is within tolerance: 65.17869ms
	I1011 22:45:28.675342   85254 start.go:83] releasing machines lock for "newest-cni-555648", held for 19.737646144s
	I1011 22:45:28.675359   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:28.675621   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:45:28.677866   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.678197   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.678225   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.678380   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:28.678850   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:28.679000   85254 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:45:28.679085   85254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:45:28.679138   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.679173   85254 ssh_runner.go:195] Run: cat /version.json
	I1011 22:45:28.679196   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:45:28.681668   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.681831   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.682016   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.682044   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.682200   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.682225   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:28.682244   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:28.682349   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.682424   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:45:28.682494   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.682580   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:45:28.682643   85254 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:45:28.682695   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:45:28.682837   85254 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:45:28.781992   85254 ssh_runner.go:195] Run: systemctl --version
	I1011 22:45:28.787936   85254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:45:28.932052   85254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:45:28.938121   85254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:45:28.938192   85254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:45:28.954365   85254 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:45:28.954391   85254 start.go:495] detecting cgroup driver to use...
	I1011 22:45:28.954450   85254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:45:28.970490   85254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:45:28.985198   85254 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:45:28.985252   85254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:45:28.999549   85254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:45:29.012848   85254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:45:29.127292   85254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:45:29.278397   85254 docker.go:233] disabling docker service ...
	I1011 22:45:29.278482   85254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:45:29.292713   85254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:45:29.305673   85254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:45:29.428046   85254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:45:29.555279   85254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:45:29.569267   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:45:29.587749   85254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:45:29.587809   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.598069   85254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:45:29.598126   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.608533   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.619061   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.629491   85254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:45:29.640113   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.650781   85254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.668093   85254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:45:29.678881   85254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:45:29.688557   85254 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:45:29.688632   85254 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:45:29.702407   85254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:45:29.712161   85254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:45:29.831697   85254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:45:29.923050   85254 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:45:29.923110   85254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:45:29.927905   85254 start.go:563] Will wait 60s for crictl version
	I1011 22:45:29.927951   85254 ssh_runner.go:195] Run: which crictl
	I1011 22:45:29.931538   85254 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:45:29.969986   85254 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:45:29.970058   85254 ssh_runner.go:195] Run: crio --version
	I1011 22:45:30.011806   85254 ssh_runner.go:195] Run: crio --version
	I1011 22:45:30.045018   85254 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:45:30.046564   85254 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:45:30.049203   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:30.049751   85254 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:45:30.049783   85254 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:45:30.049997   85254 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:45:30.054241   85254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:45:30.069412   85254 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1011 22:45:30.070860   85254 kubeadm.go:883] updating cluster {Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:45:30.071001   85254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:45:30.071073   85254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:45:30.112762   85254 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:45:30.112820   85254 ssh_runner.go:195] Run: which lz4
	I1011 22:45:30.116700   85254 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:45:30.120796   85254 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:45:30.120832   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:45:31.449547   85254 crio.go:462] duration metric: took 1.332876968s to copy over tarball
	I1011 22:45:31.449627   85254 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:45:33.461234   85254 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.011571516s)
	I1011 22:45:33.461265   85254 crio.go:469] duration metric: took 2.01169137s to extract the tarball
	I1011 22:45:33.461276   85254 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:45:33.497679   85254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:45:33.543083   85254 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:45:33.543112   85254 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:45:33.543122   85254 kubeadm.go:934] updating node { 192.168.50.28 8443 v1.31.1 crio true true} ...
	I1011 22:45:33.543268   85254 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-555648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:45:33.543360   85254 ssh_runner.go:195] Run: crio config
	I1011 22:45:33.591174   85254 cni.go:84] Creating CNI manager for ""
	I1011 22:45:33.591197   85254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:45:33.591206   85254 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1011 22:45:33.591230   85254 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-555648 NodeName:newest-cni-555648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:45:33.591380   85254 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-555648"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:45:33.591446   85254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:45:33.601535   85254 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:45:33.601606   85254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:45:33.611156   85254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1011 22:45:33.628513   85254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:45:33.646483   85254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I1011 22:45:33.664737   85254 ssh_runner.go:195] Run: grep 192.168.50.28	control-plane.minikube.internal$ /etc/hosts
	I1011 22:45:33.668896   85254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:45:33.682059   85254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:45:33.812874   85254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:45:33.830171   85254 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648 for IP: 192.168.50.28
	I1011 22:45:33.830197   85254 certs.go:194] generating shared ca certs ...
	I1011 22:45:33.830222   85254 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:45:33.830399   85254 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:45:33.830463   85254 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:45:33.830477   85254 certs.go:256] generating profile certs ...
	I1011 22:45:33.830586   85254 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/client.key
	I1011 22:45:33.830706   85254 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key.83610948
	I1011 22:45:33.830768   85254 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key
	I1011 22:45:33.830905   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:45:33.830944   85254 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:45:33.830958   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:45:33.830992   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:45:33.831026   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:45:33.831056   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:45:33.831112   85254 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:45:33.831965   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:45:33.865818   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:45:33.894283   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:45:33.922052   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:45:33.959643   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:45:33.989341   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:45:34.015161   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:45:34.039244   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:45:34.068520   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:45:34.092061   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:45:34.116670   85254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
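Once the profile certificates have been copied into /var/lib/minikube/certs, a quick hedged sanity check is to verify that a profile certificate chains back to the minikube CA (paths are the copy targets shown above):

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt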
	I1011 22:45:34.140653   85254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:45:34.157967   85254 ssh_runner.go:195] Run: openssl version
	I1011 22:45:34.164003   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:45:34.175353   85254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:45:34.179939   85254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:45:34.179994   85254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:45:34.185612   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:45:34.195847   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:45:34.207005   85254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:45:34.211350   85254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:45:34.211408   85254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:45:34.217603   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:45:34.229057   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:45:34.239295   85254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:45:34.243665   85254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:45:34.243713   85254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:45:34.249388   85254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
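The openssl x509 -hash calls above compute the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (for example b5213941.0 pointing at minikubeCA.pem in this run). A minimal sketch of the same linking step, assuming the certificate is already under /usr/share/ca-certificates:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")       # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # hash-named link OpenSSL looks up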
	I1011 22:45:34.259668   85254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:45:34.264016   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:45:34.269830   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:45:34.275472   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:45:34.281010   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:45:34.286470   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:45:34.291913   85254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
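Each -checkend 86400 run above asks OpenSSL whether the certificate will expire within the next 24 hours (86400 seconds); a non-zero exit status would trigger regeneration. A hedged loop over the same control-plane certificates:

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h"
    done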
	I1011 22:45:34.297420   85254 kubeadm.go:392] StartCluster: {Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:45:34.297496   85254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:45:34.297551   85254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:45:34.333742   85254 cri.go:89] found id: ""
	I1011 22:45:34.333811   85254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:45:34.343684   85254 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:45:34.343706   85254 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:45:34.343752   85254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:45:34.353166   85254 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:45:34.353883   85254 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-555648" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:45:34.354199   85254 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-555648" cluster setting kubeconfig missing "newest-cni-555648" context setting]
	I1011 22:45:34.354788   85254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:45:34.356419   85254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:45:34.365200   85254 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.28
	I1011 22:45:34.365222   85254 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:45:34.365232   85254 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:45:34.365265   85254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:45:34.408527   85254 cri.go:89] found id: ""
	I1011 22:45:34.408585   85254 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:45:34.424720   85254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:45:34.434069   85254 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:45:34.434087   85254 kubeadm.go:157] found existing configuration files:
	
	I1011 22:45:34.434132   85254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:45:34.442810   85254 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:45:34.442892   85254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:45:34.453143   85254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:45:34.462154   85254 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:45:34.462255   85254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:45:34.471629   85254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:45:34.480125   85254 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:45:34.480169   85254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:45:34.489156   85254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:45:34.497821   85254 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:45:34.497881   85254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
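The grep/rm sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise remove it (here the files do not exist yet, so every grep fails and each rm is a no-op). A compact, hedged equivalent:

    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done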
	I1011 22:45:34.506871   85254 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:45:34.515785   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:45:34.624808   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:45:35.891629   85254 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.266750798s)
	I1011 22:45:35.891705   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:45:36.092249   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:45:36.170953   85254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
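The restart path re-runs kubeadm one phase at a time instead of a full kubeadm init, and the order matters: certs before kubeconfigs, kubelet-start before the static control-plane pods, etcd last. A hedged sketch of the same sequence, using the binary and config paths from the log:

    KUBEADM=/var/lib/minikube/binaries/v1.31.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase certs all         --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all    --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start     --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local        --config "$CFG"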
	I1011 22:45:36.306112   85254 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:45:36.306210   85254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:45:36.806467   85254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:45:37.306437   85254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:45:37.806892   85254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:45:37.823385   85254 api_server.go:72] duration metric: took 1.51727429s to wait for apiserver process to appear ...
	I1011 22:45:37.823413   85254 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:45:37.823434   85254 api_server.go:253] Checking apiserver healthz at https://192.168.50.28:8443/healthz ...
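After the control-plane phases complete, minikube first waits for a kube-apiserver process (the pgrep loop above) and then polls its /healthz endpoint. A hedged manual check against the same endpoint (-k skips TLS verification, since the minikube CA is not in the host trust store):

    until curl -fsk https://192.168.50.28:8443/healthz | grep -q '^ok$'; do
      sleep 1
    done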
	
	
	==> CRI-O <==
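The CRI-O debug entries below are the runtime answering the kubelet's periodic CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers with no filters). The same information can be pulled by hand with crictl on the node; a hedged example:

    sudo crictl version        # RuntimeService/Version
    sudo crictl imagefsinfo    # ImageService/ImageFsInfo
    sudo crictl ps -a          # RuntimeService/ListContainers (all states, no filters)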
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.269015407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686740268994524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce916a1d-3e6f-4910-8393-e90ab6ea46df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.269726280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=578285d1-fab7-49e0-809c-c17f85a7998f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.269985111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=578285d1-fab7-49e0-809c-c17f85a7998f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.270349042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=578285d1-fab7-49e0-809c-c17f85a7998f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.317764964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=faea9b42-dc80-4c55-a72b-1a6b5f93688e name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.317839478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=faea9b42-dc80-4c55-a72b-1a6b5f93688e name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.318933179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fabf31a-f17a-49e1-a4c2-8b4bc395c871 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.319325848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686740319301639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fabf31a-f17a-49e1-a4c2-8b4bc395c871 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.319824404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d647e54-7533-461f-8ac3-bcceee74d7de name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.319893608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d647e54-7533-461f-8ac3-bcceee74d7de name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.320098354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d647e54-7533-461f-8ac3-bcceee74d7de name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.361985825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a04e6fb3-ae9c-41de-b6da-afa51f0652d7 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.362071468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a04e6fb3-ae9c-41de-b6da-afa51f0652d7 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.362988969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0160b16e-47c6-4884-9241-39afd01295a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.363955928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686740363924532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0160b16e-47c6-4884-9241-39afd01295a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.365403219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34b5b478-aa77-4dad-903a-10771d17377c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.365477353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34b5b478-aa77-4dad-903a-10771d17377c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.365800872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34b5b478-aa77-4dad-903a-10771d17377c name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.408699982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=838c36ae-9950-4671-a81d-aa1b5a51dd2c name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.408794756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=838c36ae-9950-4671-a81d-aa1b5a51dd2c name=/runtime.v1.RuntimeService/Version
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.410800222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef2d74bf-9047-4353-b514-9b69d02f9ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.411368338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686740411334190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef2d74bf-9047-4353-b514-9b69d02f9ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.412073899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c61310d-3c55-4fe3-a3a7-6bdac0675d6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.412146715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c61310d-3c55-4fe3-a3a7-6bdac0675d6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:45:40 default-k8s-diff-port-070708 crio[715]: time="2024-10-11 22:45:40.412416269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef,PodSandboxId:6e73d99a53a989aec01ff94baba6b96c24a27b913c8ac2a8a8c78ed4318d1eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685775033944274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029fb14-2375-4536-8176-c0dcaca6319b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab,PodSandboxId:9a451a513699b492be56981a96ba34d3ae4485b5f6cbb67a1784f2cc121c5595,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774225371763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtw9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4e99be-007f-4fe6-9436-d1eaaee7ec8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073,PodSandboxId:603dad3b36fb5a658b84a40d4a0d4ff840abc5dbcd6a20367747e875d79613e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728685773927750290,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5jxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 96a6f08b-a873-4f2a-8ef1-4e573368e28e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef,PodSandboxId:d0f860f9ed0068fc49d375d8303a11118360f8eaad58220afe553e10c66344d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685774168334222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zvctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0fd5a2-533b-4b3b-8454-
0c0cc12cbdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1,PodSandboxId:67d24d1da794944c9b0da531f5f8340f9c35bbbc29f3155984ea0cacd44bcace,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172868576217440644
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4f4d96377b56a36236a8ab61a1075c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9,PodSandboxId:85663564de4b63f7cc48acbf2222b78f430cf105156e87437ca0d2c957281da6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685762143469429,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68a9ec5fcafc02efd2c11b7151e9803,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144,PodSandboxId:99d604ac06aa961c6af145d57eab09713bd6ef66f4105254ae2d4fb25c5e0e3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685762125929731,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d,PodSandboxId:ff6670dfc012a73ec6541aa33687e243d05d49bf03014bcd4b66a2463d7c2422,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685762078044416,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df50ec9fdce269fd0e8db212ffcefb4f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd,PodSandboxId:b90c8b0b224e5b5ab317d875c7bed96574cf211bd21cbfb8b1be47d6b11454d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685472163599434,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-070708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255fdaea7a0b0cc50ab1396621c50c81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c61310d-3c55-4fe3-a3a7-6bdac0675d6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2229a9f091011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   6e73d99a53a98       storage-provisioner
	93d5a4bb1b104       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   9a451a513699b       coredns-7c65d6cfc9-gtw9g
	da6e7a92c7137       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   d0f860f9ed006       coredns-7c65d6cfc9-zvctp
	571b0b8905a01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   603dad3b36fb5       kube-proxy-f5jxp
	f483651efa722       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   67d24d1da7949       kube-scheduler-default-k8s-diff-port-070708
	bf663831c4154       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   85663564de4b6       etcd-default-k8s-diff-port-070708
	01d96e49d1dce       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   99d604ac06aa9       kube-apiserver-default-k8s-diff-port-070708
	be779ed72e098       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   ff6670dfc012a       kube-controller-manager-default-k8s-diff-port-070708
	ec5a10bd3c273       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   b90c8b0b224e5       kube-apiserver-default-k8s-diff-port-070708
	
	
	==> coredns [93d5a4bb1b1048d6cc25141890a51919f8edbfdeb919331387472d4bb75e9aab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da6e7a92c7137af7afe9a90a68605e8bdca33f32e729bbc50eb08c5634f572ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-070708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-070708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=default-k8s-diff-port-070708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-070708
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:45:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:44:57 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:44:57 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:44:57 +0000   Fri, 11 Oct 2024 22:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:44:57 +0000   Fri, 11 Oct 2024 22:29:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    default-k8s-diff-port-070708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2fe1e0181f7643ee9e66948960752b8c
	  System UUID:                2fe1e018-1f76-43ee-9e66-948960752b8c
	  Boot ID:                    c2f120d1-1329-4de0-90a6-c86e11e687ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gtw9g                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-zvctp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-070708                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-070708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-070708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-f5jxp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-070708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-88h5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-070708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-070708 event: Registered Node default-k8s-diff-port-070708 in Controller
	
	
	==> dmesg <==
	[  +0.050534] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041244] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995654] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471180] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568262] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.765860] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.056443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060883] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.225352] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.163783] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.317884] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.121044] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +1.987558] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.121770] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.646545] kauditd_printk_skb: 65 callbacks suppressed
	[Oct11 22:29] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.454119] systemd-fstab-generator[2571]: Ignoring "noauto" option for root device
	[  +4.429206] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.639402] systemd-fstab-generator[2896]: Ignoring "noauto" option for root device
	[  +5.891278] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[  +0.121159] kauditd_printk_skb: 14 callbacks suppressed
	[Oct11 22:30] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [bf663831c415444baae1d6514feca722842800fa8b49244201850ef2491126e9] <==
	{"level":"info","ts":"2024-10-11T22:29:23.292255Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.292402Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:29:23.292660Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:23.294529Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:29:23.295230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:23.298029Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-10-11T22:29:23.300591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.300684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.300719Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:29:23.302836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:29:23.305718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:39:23.421670Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-10-11T22:39:23.430571Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"8.569056ms","hash":3392675963,"current-db-size-bytes":2170880,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2170880,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-11T22:39:23.430627Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3392675963,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-10-11T22:44:23.429415Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-10-11T22:44:23.435775Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"5.384346ms","hash":2887941012,"current-db-size-bytes":2170880,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-11T22:44:23.435859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2887941012,"revision":931,"compact-revision":688}
	{"level":"warn","ts":"2024-10-11T22:44:42.368625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.678551ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472317286627659354 > lease_revoke:<id:2d16927db367cdfe>","response":"size:29"}
	{"level":"warn","ts":"2024-10-11T22:45:36.327683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.194582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-11T22:45:36.327932Z","caller":"traceutil/trace.go:171","msg":"trace[1036725931] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1235; }","duration":"137.523453ms","start":"2024-10-11T22:45:36.190380Z","end":"2024-10-11T22:45:36.327903Z","steps":["trace[1036725931] 'count revisions from in-memory index tree'  (duration: 137.142239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-11T22:45:37.074603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.760488ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472317286627659675 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.162\" mod_revision:1228 > success:<request_put:<key:\"/registry/masterleases/192.168.39.162\" value_size:67 lease:3248945249772883865 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.162\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-11T22:45:37.074682Z","caller":"traceutil/trace.go:171","msg":"trace[1565776972] linearizableReadLoop","detail":"{readStateIndex:1445; appliedIndex:1444; }","duration":"158.193675ms","start":"2024-10-11T22:45:36.916479Z","end":"2024-10-11T22:45:37.074673Z","steps":["trace[1565776972] 'read index received'  (duration: 40.712067ms)","trace[1565776972] 'applied index is now lower than readState.Index'  (duration: 117.480687ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-11T22:45:37.074758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.276001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-11T22:45:37.074772Z","caller":"traceutil/trace.go:171","msg":"trace[2066677992] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1236; }","duration":"158.291465ms","start":"2024-10-11T22:45:36.916475Z","end":"2024-10-11T22:45:37.074767Z","steps":["trace[2066677992] 'agreement among raft nodes before linearized reading'  (duration: 158.229468ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-11T22:45:37.074848Z","caller":"traceutil/trace.go:171","msg":"trace[991952735] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"245.609486ms","start":"2024-10-11T22:45:36.829213Z","end":"2024-10-11T22:45:37.074822Z","steps":["trace[991952735] 'process raft request'  (duration: 128.047914ms)","trace[991952735] 'compare'  (duration: 116.660851ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:45:40 up 21 min,  0 users,  load average: 0.06, 0.19, 0.17
	Linux default-k8s-diff-port-070708 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01d96e49d1dceeb88d4aade96dc39ec01c8fb39ace1780c1d901971df47c3144] <==
	I1011 22:42:26.139835       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:42:26.139874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:44:25.138752       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:25.139044       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1011 22:44:26.141418       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:26.141679       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:44:26.141800       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:44:26.141875       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:44:26.142836       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:44:26.144015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:45:26.143817       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:45:26.144048       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:45:26.144238       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:45:26.144412       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:45:26.146011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:45:26.146086       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ec5a10bd3c27317f3fbe4363b8a3ceb52b5bd57ea4df04cfc5989c57523848dd] <==
	W1011 22:29:18.414388       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.440143       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.482966       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.513771       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.535574       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.574131       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.661843       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.706893       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.722583       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.735169       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.751741       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.818667       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.881094       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.881178       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.926294       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:18.974665       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.005461       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.073604       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.206269       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.214292       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.218692       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.259731       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.293055       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.459042       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:29:19.529738       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [be779ed72e098e79b069b5421df8029cd37b07892659b8551ec70a1a528dc57d] <==
	E1011 22:40:32.118382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:40:32.707669       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:40:37.450682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="227.301µs"
	I1011 22:40:50.446926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="114.944µs"
	E1011 22:41:02.125348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:02.717831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:41:32.132460       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:32.730325       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:02.140306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:02.737355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:32.147286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:32.745689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:02.153732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:02.756070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:32.160408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:32.767648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:44:02.166382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:44:02.777947       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:44:32.173055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:44:32.786250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:44:57.262382       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-070708"
	E1011 22:45:02.180398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:45:02.793942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:45:32.191852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:45:32.806266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [571b0b8905a01b3e1b372d5dc1deda2665727bc74dbb6d443a3055d2ae287073] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:29:34.789830       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:29:34.821623       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1011 22:29:34.822349       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:29:34.879079       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:29:34.879108       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:29:34.879132       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:29:34.883660       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:29:34.884447       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:29:34.884714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:29:34.889209       1 config.go:199] "Starting service config controller"
	I1011 22:29:34.889278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:29:34.889386       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:29:34.889531       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:29:34.890475       1 config.go:328] "Starting node config controller"
	I1011 22:29:34.890573       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:29:34.989703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 22:29:34.989847       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:29:34.991217       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f483651efa722f2c9e2a97fcc5619b0deeedc26d10ef60444c0b949f5f57cad1] <==
	W1011 22:29:25.171547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:25.171583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:25.171830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:25.171870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:25.172577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:25.172686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.014574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:29:26.014634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.046338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:29:26.046392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.156420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.156607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.172806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:29:26.172875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.177038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.250894       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:29:26.251056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:29:26.313292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:29:26.313421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.321899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:29:26.321948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:29:26.347247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 22:29:26.347374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1011 22:29:29.449162       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:44:45 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:45.434268    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:44:47 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:47.705087    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686687704781320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:47 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:47.705137    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686687704781320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:56 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:56.432333    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:44:57 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:57.706238    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686697705920230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:57 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:44:57.706337    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686697705920230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:07 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:07.707654    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686707707026250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:07 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:07.707691    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686707707026250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:09 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:09.434602    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:45:17 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:17.709070    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686717708847858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:17 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:17.709116    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686717708847858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:23 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:23.431584    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:27.451714    2903 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:27.710373    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686727710165940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:27 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:27.710396    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686727710165940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.450014    2903 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.450432    2903 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.451198    2903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ghxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-88h5g_kube-system(d1b9fc5b-820d-4324-9883-70cb84f0044f): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.452569    2903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-88h5g" podUID="d1b9fc5b-820d-4324-9883-70cb84f0044f"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.712072    2903 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686737711836582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:45:37 default-k8s-diff-port-070708 kubelet[2903]: E1011 22:45:37.712128    2903 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686737711836582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
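	
	The repeating eviction_manager errors above all come from the kubelet's ImageFsInfo query against cri-o; the response it logs carries an ImageFilesystems entry but an empty ContainerFilesystems list, which is what "missing image stats" refers to. As a hedged sketch only (not part of the test run; it assumes the profile is still up and uses the profile name from this log), the same CRI data can be queried by hand from the build host:
	
	  minikube ssh -p default-k8s-diff-port-070708 -- sudo crictl imagefsinfo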
	
	
	==> storage-provisioner [2229a9f091011206b1ef658cfdfd86bb90c88461a9299dac844e1741211027ef] <==
	I1011 22:29:35.131578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:29:35.141149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:29:35.141222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:29:35.154171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:29:35.156404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2!
	I1011 22:29:35.160701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b0b144f-f2cf-474f-8ace-c1a4f70bedd9", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2 became leader
	I1011 22:29:35.257277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-070708_81dcbbba-3a73-4ebe-bb37-8a3888fb1be2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-88h5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g: exit status 1 (64.764505ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-88h5g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-070708 describe pod metrics-server-6867b74b74-88h5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (416.16s)
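The post-mortem above reports metrics-server-6867b74b74-88h5g as the only non-Running pod, and the kubelet log shows why: its image pull keeps backing off against fake.domain, the registry the test deliberately redirects metrics-server to. A rough sketch of the same checks the helpers perform, should you want to rerun them by hand against a live profile (context and pod names copied from the output above; the deployment name metrics-server is inferred from the pod name and is an assumption):

  # pods not in Running phase, as helpers_test.go queries them
  kubectl --context default-k8s-diff-port-070708 get po -A --field-selector=status.phase!=Running
  # image the metrics-server deployment was rewritten to use by the addon
  kubectl --context default-k8s-diff-port-070708 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'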

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (304.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-390487 -n no-preload-390487
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-11 22:44:36.188260947 +0000 UTC m=+6396.202618459
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-390487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-390487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.658µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-390487 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-390487 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-390487 logs -n 25: (1.538422025s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC | 11 Oct 24 22:44 UTC |
	| start   | -p newest-cni-555648 --memory=2200 --alsologtostderr   | newest-cni-555648            | jenkins | v1.34.0 | 11 Oct 24 22:44 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
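	
	Condensed, the audit entries for no-preload-390487 describe the enable/stop/enable/start cycle this test exercises. A sketch of those commands, with the flags copied from the table (the harness invokes them through out/minikube-linux-amd64, and the metrics-server registry is intentionally redirected to the unresolvable fake.domain):
	
	  minikube addons enable metrics-server -p no-preload-390487 \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	  minikube stop -p no-preload-390487 --alsologtostderr -v=3
	  minikube addons enable dashboard -p no-preload-390487 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	  minikube start -p no-preload-390487 --memory=2200 --alsologtostderr --wait=true \
	    --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1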
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:44:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:44:04.929267   84310 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:44:04.929378   84310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:44:04.929386   84310 out.go:358] Setting ErrFile to fd 2...
	I1011 22:44:04.929391   84310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:44:04.929574   84310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:44:04.930109   84310 out.go:352] Setting JSON to false
	I1011 22:44:04.931029   84310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8790,"bootTime":1728677855,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:44:04.931114   84310 start.go:139] virtualization: kvm guest
	I1011 22:44:04.933984   84310 out.go:177] * [newest-cni-555648] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:44:04.935346   84310 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:44:04.935374   84310 notify.go:220] Checking for updates...
	I1011 22:44:04.937828   84310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:44:04.938935   84310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:44:04.940245   84310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:04.941368   84310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:44:04.942442   84310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:44:04.943917   84310 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944007   84310 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944086   84310 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:04.944154   84310 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:44:04.982814   84310 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:44:04.983997   84310 start.go:297] selected driver: kvm2
	I1011 22:44:04.984012   84310 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:44:04.984023   84310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:44:04.984705   84310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:44:04.984803   84310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:44:05.000396   84310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:44:05.000449   84310 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1011 22:44:05.000513   84310 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1011 22:44:05.000787   84310 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 22:44:05.000818   84310 cni.go:84] Creating CNI manager for ""
	I1011 22:44:05.000863   84310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:44:05.000871   84310 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 22:44:05.000917   84310 start.go:340] cluster config:
	{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
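	
	The warning at 22:44:05.000513 notes that --network-plugin=cni requires supplying your own CNI and points at --cni as the friendlier flag; the lines that follow show minikube auto-selecting the bridge CNI for the kvm2/crio combination. As an illustrative alternative only (not what the harness ran), the same outcome could be requested explicitly:
	
	  minikube start -p newest-cni-555648 --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.31.1 --cni=bridge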
	I1011 22:44:05.001009   84310 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:44:05.002796   84310 out.go:177] * Starting "newest-cni-555648" primary control-plane node in "newest-cni-555648" cluster
	I1011 22:44:05.003904   84310 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:44:05.003948   84310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 22:44:05.003961   84310 cache.go:56] Caching tarball of preloaded images
	I1011 22:44:05.004039   84310 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:44:05.004053   84310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1011 22:44:05.004151   84310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:44:05.004175   84310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json: {Name:mka41f2ff10a8dbe5874167147443fa9f14151f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:44:05.004349   84310 start.go:360] acquireMachinesLock for newest-cni-555648: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:44:05.004386   84310 start.go:364] duration metric: took 21.411µs to acquireMachinesLock for "newest-cni-555648"
	I1011 22:44:05.004408   84310 start.go:93] Provisioning new machine with config: &{Name:newest-cni-555648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-555648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:44:05.004495   84310 start.go:125] createHost starting for "" (driver="kvm2")
	I1011 22:44:05.006123   84310 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 22:44:05.006273   84310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:44:05.006311   84310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:44:05.020636   84310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I1011 22:44:05.021105   84310 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:44:05.021627   84310 main.go:141] libmachine: Using API Version  1
	I1011 22:44:05.021649   84310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:44:05.022052   84310 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:44:05.022264   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:05.022447   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:05.022680   84310 start.go:159] libmachine.API.Create for "newest-cni-555648" (driver="kvm2")
	I1011 22:44:05.022710   84310 client.go:168] LocalClient.Create starting
	I1011 22:44:05.022748   84310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem
	I1011 22:44:05.022788   84310 main.go:141] libmachine: Decoding PEM data...
	I1011 22:44:05.022809   84310 main.go:141] libmachine: Parsing certificate...
	I1011 22:44:05.022890   84310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem
	I1011 22:44:05.022916   84310 main.go:141] libmachine: Decoding PEM data...
	I1011 22:44:05.022935   84310 main.go:141] libmachine: Parsing certificate...
	I1011 22:44:05.022958   84310 main.go:141] libmachine: Running pre-create checks...
	I1011 22:44:05.022974   84310 main.go:141] libmachine: (newest-cni-555648) Calling .PreCreateCheck
	I1011 22:44:05.023352   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:05.023740   84310 main.go:141] libmachine: Creating machine...
	I1011 22:44:05.023753   84310 main.go:141] libmachine: (newest-cni-555648) Calling .Create
	I1011 22:44:05.023896   84310 main.go:141] libmachine: (newest-cni-555648) Creating KVM machine...
	I1011 22:44:05.025190   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found existing default KVM network
	I1011 22:44:05.026351   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.026210   84333 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:64:26} reservation:<nil>}
	I1011 22:44:05.027494   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.027413   84333 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4900}
	I1011 22:44:05.027513   84310 main.go:141] libmachine: (newest-cni-555648) DBG | created network xml: 
	I1011 22:44:05.027524   84310 main.go:141] libmachine: (newest-cni-555648) DBG | <network>
	I1011 22:44:05.027532   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <name>mk-newest-cni-555648</name>
	I1011 22:44:05.027541   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <dns enable='no'/>
	I1011 22:44:05.027550   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   
	I1011 22:44:05.027563   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1011 22:44:05.027574   84310 main.go:141] libmachine: (newest-cni-555648) DBG |     <dhcp>
	I1011 22:44:05.027592   84310 main.go:141] libmachine: (newest-cni-555648) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1011 22:44:05.027606   84310 main.go:141] libmachine: (newest-cni-555648) DBG |     </dhcp>
	I1011 22:44:05.027634   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   </ip>
	I1011 22:44:05.027665   84310 main.go:141] libmachine: (newest-cni-555648) DBG |   
	I1011 22:44:05.027674   84310 main.go:141] libmachine: (newest-cni-555648) DBG | </network>
	I1011 22:44:05.027682   84310 main.go:141] libmachine: (newest-cni-555648) DBG | 
	I1011 22:44:05.033008   84310 main.go:141] libmachine: (newest-cni-555648) DBG | trying to create private KVM network mk-newest-cni-555648 192.168.50.0/24...
	I1011 22:44:05.101048   84310 main.go:141] libmachine: (newest-cni-555648) DBG | private KVM network mk-newest-cni-555648 192.168.50.0/24 created
	I1011 22:44:05.101080   84310 main.go:141] libmachine: (newest-cni-555648) Setting up store path in /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 ...
	I1011 22:44:05.101091   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.101030   84333 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:05.101104   84310 main.go:141] libmachine: (newest-cni-555648) Building disk image from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 22:44:05.101239   84310 main.go:141] libmachine: (newest-cni-555648) Downloading /home/jenkins/minikube-integration/19749-11611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1011 22:44:05.351303   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.351180   84333 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa...
	I1011 22:44:05.506606   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.506497   84333 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/newest-cni-555648.rawdisk...
	I1011 22:44:05.506654   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Writing magic tar header
	I1011 22:44:05.506667   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Writing SSH key tar header
	I1011 22:44:05.506683   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:05.506644   84333 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 ...
	I1011 22:44:05.506767   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648
	I1011 22:44:05.506811   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube/machines
	I1011 22:44:05.506828   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648 (perms=drwx------)
	I1011 22:44:05.506842   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:44:05.506857   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19749-11611
	I1011 22:44:05.506866   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1011 22:44:05.506873   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home/jenkins
	I1011 22:44:05.506880   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Checking permissions on dir: /home
	I1011 22:44:05.506888   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Skipping /home - not owner
	I1011 22:44:05.506904   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube/machines (perms=drwxr-xr-x)
	I1011 22:44:05.506912   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611/.minikube (perms=drwxr-xr-x)
	I1011 22:44:05.506919   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration/19749-11611 (perms=drwxrwxr-x)
	I1011 22:44:05.506926   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1011 22:44:05.506932   84310 main.go:141] libmachine: (newest-cni-555648) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1011 22:44:05.506939   84310 main.go:141] libmachine: (newest-cni-555648) Creating domain...
	I1011 22:44:05.507980   84310 main.go:141] libmachine: (newest-cni-555648) define libvirt domain using xml: 
	I1011 22:44:05.507999   84310 main.go:141] libmachine: (newest-cni-555648) <domain type='kvm'>
	I1011 22:44:05.508009   84310 main.go:141] libmachine: (newest-cni-555648)   <name>newest-cni-555648</name>
	I1011 22:44:05.508017   84310 main.go:141] libmachine: (newest-cni-555648)   <memory unit='MiB'>2200</memory>
	I1011 22:44:05.508031   84310 main.go:141] libmachine: (newest-cni-555648)   <vcpu>2</vcpu>
	I1011 22:44:05.508040   84310 main.go:141] libmachine: (newest-cni-555648)   <features>
	I1011 22:44:05.508063   84310 main.go:141] libmachine: (newest-cni-555648)     <acpi/>
	I1011 22:44:05.508073   84310 main.go:141] libmachine: (newest-cni-555648)     <apic/>
	I1011 22:44:05.508079   84310 main.go:141] libmachine: (newest-cni-555648)     <pae/>
	I1011 22:44:05.508085   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508091   84310 main.go:141] libmachine: (newest-cni-555648)   </features>
	I1011 22:44:05.508101   84310 main.go:141] libmachine: (newest-cni-555648)   <cpu mode='host-passthrough'>
	I1011 22:44:05.508112   84310 main.go:141] libmachine: (newest-cni-555648)   
	I1011 22:44:05.508121   84310 main.go:141] libmachine: (newest-cni-555648)   </cpu>
	I1011 22:44:05.508132   84310 main.go:141] libmachine: (newest-cni-555648)   <os>
	I1011 22:44:05.508148   84310 main.go:141] libmachine: (newest-cni-555648)     <type>hvm</type>
	I1011 22:44:05.508159   84310 main.go:141] libmachine: (newest-cni-555648)     <boot dev='cdrom'/>
	I1011 22:44:05.508168   84310 main.go:141] libmachine: (newest-cni-555648)     <boot dev='hd'/>
	I1011 22:44:05.508177   84310 main.go:141] libmachine: (newest-cni-555648)     <bootmenu enable='no'/>
	I1011 22:44:05.508184   84310 main.go:141] libmachine: (newest-cni-555648)   </os>
	I1011 22:44:05.508192   84310 main.go:141] libmachine: (newest-cni-555648)   <devices>
	I1011 22:44:05.508203   84310 main.go:141] libmachine: (newest-cni-555648)     <disk type='file' device='cdrom'>
	I1011 22:44:05.508218   84310 main.go:141] libmachine: (newest-cni-555648)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/boot2docker.iso'/>
	I1011 22:44:05.508239   84310 main.go:141] libmachine: (newest-cni-555648)       <target dev='hdc' bus='scsi'/>
	I1011 22:44:05.508250   84310 main.go:141] libmachine: (newest-cni-555648)       <readonly/>
	I1011 22:44:05.508259   84310 main.go:141] libmachine: (newest-cni-555648)     </disk>
	I1011 22:44:05.508269   84310 main.go:141] libmachine: (newest-cni-555648)     <disk type='file' device='disk'>
	I1011 22:44:05.508277   84310 main.go:141] libmachine: (newest-cni-555648)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1011 22:44:05.508290   84310 main.go:141] libmachine: (newest-cni-555648)       <source file='/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/newest-cni-555648.rawdisk'/>
	I1011 22:44:05.508306   84310 main.go:141] libmachine: (newest-cni-555648)       <target dev='hda' bus='virtio'/>
	I1011 22:44:05.508318   84310 main.go:141] libmachine: (newest-cni-555648)     </disk>
	I1011 22:44:05.508332   84310 main.go:141] libmachine: (newest-cni-555648)     <interface type='network'>
	I1011 22:44:05.508344   84310 main.go:141] libmachine: (newest-cni-555648)       <source network='mk-newest-cni-555648'/>
	I1011 22:44:05.508353   84310 main.go:141] libmachine: (newest-cni-555648)       <model type='virtio'/>
	I1011 22:44:05.508361   84310 main.go:141] libmachine: (newest-cni-555648)     </interface>
	I1011 22:44:05.508369   84310 main.go:141] libmachine: (newest-cni-555648)     <interface type='network'>
	I1011 22:44:05.508377   84310 main.go:141] libmachine: (newest-cni-555648)       <source network='default'/>
	I1011 22:44:05.508384   84310 main.go:141] libmachine: (newest-cni-555648)       <model type='virtio'/>
	I1011 22:44:05.508395   84310 main.go:141] libmachine: (newest-cni-555648)     </interface>
	I1011 22:44:05.508405   84310 main.go:141] libmachine: (newest-cni-555648)     <serial type='pty'>
	I1011 22:44:05.508431   84310 main.go:141] libmachine: (newest-cni-555648)       <target port='0'/>
	I1011 22:44:05.508454   84310 main.go:141] libmachine: (newest-cni-555648)     </serial>
	I1011 22:44:05.508472   84310 main.go:141] libmachine: (newest-cni-555648)     <console type='pty'>
	I1011 22:44:05.508487   84310 main.go:141] libmachine: (newest-cni-555648)       <target type='serial' port='0'/>
	I1011 22:44:05.508498   84310 main.go:141] libmachine: (newest-cni-555648)     </console>
	I1011 22:44:05.508508   84310 main.go:141] libmachine: (newest-cni-555648)     <rng model='virtio'>
	I1011 22:44:05.508521   84310 main.go:141] libmachine: (newest-cni-555648)       <backend model='random'>/dev/random</backend>
	I1011 22:44:05.508531   84310 main.go:141] libmachine: (newest-cni-555648)     </rng>
	I1011 22:44:05.508546   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508562   84310 main.go:141] libmachine: (newest-cni-555648)     
	I1011 22:44:05.508581   84310 main.go:141] libmachine: (newest-cni-555648)   </devices>
	I1011 22:44:05.508594   84310 main.go:141] libmachine: (newest-cni-555648) </domain>
	I1011 22:44:05.508608   84310 main.go:141] libmachine: (newest-cni-555648) 
	I1011 22:44:05.512354   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:e5:b8:5b in network default
	I1011 22:44:05.512926   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring networks are active...
	I1011 22:44:05.512951   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:05.513618   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring network default is active
	I1011 22:44:05.513927   84310 main.go:141] libmachine: (newest-cni-555648) Ensuring network mk-newest-cni-555648 is active
	I1011 22:44:05.514434   84310 main.go:141] libmachine: (newest-cni-555648) Getting domain xml...
	I1011 22:44:05.515192   84310 main.go:141] libmachine: (newest-cni-555648) Creating domain...
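	
	At this point the private libvirt network and the newest-cni-555648 domain have been defined from the XML shown above. Outside the harness, the same objects could be inspected with stock virsh commands on the build host (a sketch only; it assumes access to qemu:///system, the URI the config uses):
	
	  virsh net-dumpxml mk-newest-cni-555648     # the private network created for this profile
	  virsh dumpxml newest-cni-555648            # the domain defined from the XML above
	  virsh net-dhcp-leases mk-newest-cni-555648 # leases the "Waiting to get IP" loop below looks for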
	I1011 22:44:06.765116   84310 main.go:141] libmachine: (newest-cni-555648) Waiting to get IP...
	I1011 22:44:06.765902   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:06.766355   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:06.766411   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:06.766353   84333 retry.go:31] will retry after 282.599684ms: waiting for machine to come up
	I1011 22:44:07.050927   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.051353   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.051382   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.051307   84333 retry.go:31] will retry after 283.892428ms: waiting for machine to come up
	I1011 22:44:07.336751   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.337244   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.337271   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.337189   84333 retry.go:31] will retry after 408.901556ms: waiting for machine to come up
	I1011 22:44:07.747499   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:07.747990   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:07.748035   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:07.747943   84333 retry.go:31] will retry after 383.080413ms: waiting for machine to come up
	I1011 22:44:08.132453   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:08.132900   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:08.132930   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:08.132832   84333 retry.go:31] will retry after 544.978224ms: waiting for machine to come up
	I1011 22:44:08.679476   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:08.679909   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:08.679931   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:08.679861   84333 retry.go:31] will retry after 809.318003ms: waiting for machine to come up
	I1011 22:44:09.490794   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:09.491432   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:09.491465   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:09.491352   84333 retry.go:31] will retry after 928.395613ms: waiting for machine to come up
	I1011 22:44:10.421620   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:10.422115   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:10.422146   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:10.422098   84333 retry.go:31] will retry after 1.418741116s: waiting for machine to come up
	I1011 22:44:11.842596   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:11.843004   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:11.843033   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:11.842965   84333 retry.go:31] will retry after 1.854251254s: waiting for machine to come up
	I1011 22:44:13.699805   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:13.700297   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:13.700323   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:13.700251   84333 retry.go:31] will retry after 1.878810401s: waiting for machine to come up
	I1011 22:44:15.580873   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:15.581354   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:15.581401   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:15.581305   84333 retry.go:31] will retry after 2.423754064s: waiting for machine to come up
	I1011 22:44:18.006085   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:18.006507   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:18.006527   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:18.006471   84333 retry.go:31] will retry after 2.377932527s: waiting for machine to come up
	I1011 22:44:20.386296   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:20.386704   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:20.386733   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:20.386670   84333 retry.go:31] will retry after 4.448322326s: waiting for machine to come up
	I1011 22:44:24.840159   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:24.840484   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find current IP address of domain newest-cni-555648 in network mk-newest-cni-555648
	I1011 22:44:24.840510   84310 main.go:141] libmachine: (newest-cni-555648) DBG | I1011 22:44:24.840447   84333 retry.go:31] will retry after 5.403094469s: waiting for machine to come up
	I1011 22:44:30.244569   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.245038   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has current primary IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.245063   84310 main.go:141] libmachine: (newest-cni-555648) Found IP for machine: 192.168.50.28
	I1011 22:44:30.245072   84310 main.go:141] libmachine: (newest-cni-555648) Reserving static IP address...
	I1011 22:44:30.245457   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find host DHCP lease matching {name: "newest-cni-555648", mac: "52:54:00:be:f3:e1", ip: "192.168.50.28"} in network mk-newest-cni-555648
	I1011 22:44:30.320948   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Getting to WaitForSSH function...
	I1011 22:44:30.320980   84310 main.go:141] libmachine: (newest-cni-555648) Reserved static IP address: 192.168.50.28
	I1011 22:44:30.321056   84310 main.go:141] libmachine: (newest-cni-555648) Waiting for SSH to be available...
	I1011 22:44:30.324115   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:30.324481   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648
	I1011 22:44:30.324503   84310 main.go:141] libmachine: (newest-cni-555648) DBG | unable to find defined IP address of network mk-newest-cni-555648 interface with MAC address 52:54:00:be:f3:e1
	I1011 22:44:30.324587   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH client type: external
	I1011 22:44:30.324605   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa (-rw-------)
	I1011 22:44:30.324652   84310 main.go:141] libmachine: (newest-cni-555648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:44:30.324664   84310 main.go:141] libmachine: (newest-cni-555648) DBG | About to run SSH command:
	I1011 22:44:30.324676   84310 main.go:141] libmachine: (newest-cni-555648) DBG | exit 0
	I1011 22:44:30.328587   84310 main.go:141] libmachine: (newest-cni-555648) DBG | SSH cmd err, output: exit status 255: 
	I1011 22:44:30.328611   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1011 22:44:30.328622   84310 main.go:141] libmachine: (newest-cni-555648) DBG | command : exit 0
	I1011 22:44:30.328629   84310 main.go:141] libmachine: (newest-cni-555648) DBG | err     : exit status 255
	I1011 22:44:30.328639   84310 main.go:141] libmachine: (newest-cni-555648) DBG | output  : 
	I1011 22:44:33.329043   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Getting to WaitForSSH function...
	I1011 22:44:33.331662   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.331967   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.332002   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.332116   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH client type: external
	I1011 22:44:33.332141   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa (-rw-------)
	I1011 22:44:33.332194   84310 main.go:141] libmachine: (newest-cni-555648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:44:33.332218   84310 main.go:141] libmachine: (newest-cni-555648) DBG | About to run SSH command:
	I1011 22:44:33.332237   84310 main.go:141] libmachine: (newest-cni-555648) DBG | exit 0
	I1011 22:44:33.458496   84310 main.go:141] libmachine: (newest-cni-555648) DBG | SSH cmd err, output: <nil>: 
	I1011 22:44:33.458819   84310 main.go:141] libmachine: (newest-cni-555648) KVM machine creation complete!
	I1011 22:44:33.459248   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:33.459737   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:33.459910   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:33.460038   84310 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1011 22:44:33.460055   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetState
	I1011 22:44:33.461425   84310 main.go:141] libmachine: Detecting operating system of created instance...
	I1011 22:44:33.461439   84310 main.go:141] libmachine: Waiting for SSH to be available...
	I1011 22:44:33.461444   84310 main.go:141] libmachine: Getting to WaitForSSH function...
	I1011 22:44:33.461449   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.463234   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.463575   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.463612   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.463742   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.463917   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.464052   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.464159   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.464337   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.464523   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.464532   84310 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1011 22:44:33.578109   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:44:33.578138   84310 main.go:141] libmachine: Detecting the provisioner...
	I1011 22:44:33.578149   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.580934   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.581401   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.581441   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.581565   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.581740   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.581897   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.582025   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.582176   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.582341   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.582352   84310 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1011 22:44:33.692103   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1011 22:44:33.692201   84310 main.go:141] libmachine: found compatible host: buildroot
	I1011 22:44:33.692214   84310 main.go:141] libmachine: Provisioning with buildroot...
	I1011 22:44:33.692231   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.692519   84310 buildroot.go:166] provisioning hostname "newest-cni-555648"
	I1011 22:44:33.692551   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.692739   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.695130   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.695430   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.695462   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.695568   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.695731   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.695855   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.695977   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.696117   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.696334   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.696351   84310 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555648 && echo "newest-cni-555648" | sudo tee /etc/hostname
	I1011 22:44:33.821605   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555648
	
	I1011 22:44:33.821636   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.824568   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.824961   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.824991   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.825137   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:33.825311   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.825461   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:33.825581   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:33.825743   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:33.825960   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:33.825978   84310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555648/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:44:33.944292   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:44:33.944318   84310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:44:33.944354   84310 buildroot.go:174] setting up certificates
	I1011 22:44:33.944367   84310 provision.go:84] configureAuth start
	I1011 22:44:33.944381   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetMachineName
	I1011 22:44:33.944641   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:33.947604   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.947993   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.948095   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.948184   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:33.950745   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.951151   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:33.951169   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:33.951452   84310 provision.go:143] copyHostCerts
	I1011 22:44:33.951526   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:44:33.951551   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:44:33.951649   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:44:33.951787   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:44:33.951802   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:44:33.951842   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:44:33.951934   84310 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:44:33.951944   84310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:44:33.951976   84310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:44:33.952057   84310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555648 san=[127.0.0.1 192.168.50.28 localhost minikube newest-cni-555648]
	I1011 22:44:34.084547   84310 provision.go:177] copyRemoteCerts
	I1011 22:44:34.084611   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:44:34.084634   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.087660   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.087932   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.087957   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.088144   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.088329   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.088468   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.088592   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.172724   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:44:34.196727   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:44:34.220574   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:44:34.244872   84310 provision.go:87] duration metric: took 300.491073ms to configureAuth
	I1011 22:44:34.244907   84310 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:44:34.245132   84310 config.go:182] Loaded profile config "newest-cni-555648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:44:34.245241   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.247929   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.248260   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.248292   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.248482   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.248674   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.248847   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.248987   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.249145   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:34.249327   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:34.249347   84310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:44:34.477556   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:44:34.477588   84310 main.go:141] libmachine: Checking connection to Docker...
	I1011 22:44:34.477600   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetURL
	I1011 22:44:34.478982   84310 main.go:141] libmachine: (newest-cni-555648) DBG | Using libvirt version 6000000
	I1011 22:44:34.481138   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.481420   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.481460   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.481573   84310 main.go:141] libmachine: Docker is up and running!
	I1011 22:44:34.481586   84310 main.go:141] libmachine: Reticulating splines...
	I1011 22:44:34.481594   84310 client.go:171] duration metric: took 29.458877128s to LocalClient.Create
	I1011 22:44:34.481628   84310 start.go:167] duration metric: took 29.458950635s to libmachine.API.Create "newest-cni-555648"
	I1011 22:44:34.481637   84310 start.go:293] postStartSetup for "newest-cni-555648" (driver="kvm2")
	I1011 22:44:34.481650   84310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:44:34.481665   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.481876   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:44:34.481899   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.483765   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.484017   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.484040   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.484229   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.484395   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.484580   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.484724   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.569527   84310 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:44:34.573502   84310 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:44:34.573523   84310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:44:34.573591   84310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:44:34.573681   84310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:44:34.573792   84310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:44:34.583748   84310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:44:34.607678   84310 start.go:296] duration metric: took 126.024827ms for postStartSetup
	I1011 22:44:34.607727   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetConfigRaw
	I1011 22:44:34.608341   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:34.610962   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.611299   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.611344   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.611575   84310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/newest-cni-555648/config.json ...
	I1011 22:44:34.611749   84310 start.go:128] duration metric: took 29.607243974s to createHost
	I1011 22:44:34.611771   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.613859   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.614140   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.614165   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.614288   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.614460   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.614603   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.614745   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.614871   84310 main.go:141] libmachine: Using SSH client type: native
	I1011 22:44:34.615021   84310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.28 22 <nil> <nil>}
	I1011 22:44:34.615031   84310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:44:34.723199   84310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728686674.705066502
	
	I1011 22:44:34.723223   84310 fix.go:216] guest clock: 1728686674.705066502
	I1011 22:44:34.723233   84310 fix.go:229] Guest: 2024-10-11 22:44:34.705066502 +0000 UTC Remote: 2024-10-11 22:44:34.611760965 +0000 UTC m=+29.722042852 (delta=93.305537ms)
	I1011 22:44:34.723256   84310 fix.go:200] guest clock delta is within tolerance: 93.305537ms
	I1011 22:44:34.723262   84310 start.go:83] releasing machines lock for "newest-cni-555648", held for 29.718864916s
	I1011 22:44:34.723288   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.723537   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:34.726079   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.726429   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.726456   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.726645   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727101   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727305   84310 main.go:141] libmachine: (newest-cni-555648) Calling .DriverName
	I1011 22:44:34.727404   84310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:44:34.727456   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.727497   84310 ssh_runner.go:195] Run: cat /version.json
	I1011 22:44:34.727532   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHHostname
	I1011 22:44:34.730097   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730256   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730450   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.730495   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730667   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.730696   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:34.730743   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:34.730813   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.730888   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHPort
	I1011 22:44:34.730968   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.731021   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHKeyPath
	I1011 22:44:34.731090   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.731164   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetSSHUsername
	I1011 22:44:34.731289   84310 sshutil.go:53] new ssh client: &{IP:192.168.50.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/newest-cni-555648/id_rsa Username:docker}
	I1011 22:44:34.833089   84310 ssh_runner.go:195] Run: systemctl --version
	I1011 22:44:34.840199   84310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:44:35.002779   84310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:44:35.008829   84310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:44:35.008896   84310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:44:35.025730   84310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:44:35.025754   84310 start.go:495] detecting cgroup driver to use...
	I1011 22:44:35.025807   84310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:44:35.043855   84310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:44:35.060827   84310 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:44:35.060893   84310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:44:35.077741   84310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:44:35.093355   84310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:44:35.220491   84310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:44:35.375832   84310 docker.go:233] disabling docker service ...
	I1011 22:44:35.375936   84310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:44:35.390879   84310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:44:35.403628   84310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:44:35.524431   84310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:44:35.652171   84310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:44:35.667985   84310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:44:35.688250   84310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:44:35.688301   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.699887   84310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:44:35.699968   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.710522   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.721077   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.731886   84310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:44:35.742574   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.753279   84310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.771374   84310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:44:35.782331   84310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:44:35.792028   84310 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:44:35.792091   84310 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:44:35.807455   84310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:44:35.817383   84310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:44:35.940408   84310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:44:36.053490   84310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:44:36.053564   84310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:44:36.059204   84310 start.go:563] Will wait 60s for crictl version
	I1011 22:44:36.059287   84310 ssh_runner.go:195] Run: which crictl
	I1011 22:44:36.063532   84310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:44:36.110367   84310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:44:36.110444   84310 ssh_runner.go:195] Run: crio --version
	I1011 22:44:36.142268   84310 ssh_runner.go:195] Run: crio --version
	I1011 22:44:36.180074   84310 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:44:36.181627   84310 main.go:141] libmachine: (newest-cni-555648) Calling .GetIP
	I1011 22:44:36.184426   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:36.184717   84310 main.go:141] libmachine: (newest-cni-555648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:f3:e1", ip: ""} in network mk-newest-cni-555648: {Iface:virbr3 ExpiryTime:2024-10-11 23:44:20 +0000 UTC Type:0 Mac:52:54:00:be:f3:e1 Iaid: IPaddr:192.168.50.28 Prefix:24 Hostname:newest-cni-555648 Clientid:01:52:54:00:be:f3:e1}
	I1011 22:44:36.184742   84310 main.go:141] libmachine: (newest-cni-555648) DBG | domain newest-cni-555648 has defined IP address 192.168.50.28 and MAC address 52:54:00:be:f3:e1 in network mk-newest-cni-555648
	I1011 22:44:36.184965   84310 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:44:36.190248   84310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:44:36.208226   84310 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.929535060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676929514477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35c515ee-4566-4055-9e59-1744e0d10031 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.930107365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a466ff3a-a1c7-4be2-9ef3-fe34b8a3f82e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.930209493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a466ff3a-a1c7-4be2-9ef3-fe34b8a3f82e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.931364355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a466ff3a-a1c7-4be2-9ef3-fe34b8a3f82e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.987935323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44d690af-4385-43e3-9075-c592d2c0d994 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.988057134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44d690af-4385-43e3-9075-c592d2c0d994 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.989303731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee0c3e33-dbd4-4885-9576-7d9c7cd03b41 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.989991019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676989733259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee0c3e33-dbd4-4885-9576-7d9c7cd03b41 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.990646650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8857fb4c-8c18-4e13-a25c-13a75609cb19 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.990730217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8857fb4c-8c18-4e13-a25c-13a75609cb19 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:36 no-preload-390487 crio[713]: time="2024-10-11 22:44:36.991097423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8857fb4c-8c18-4e13-a25c-13a75609cb19 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.044056413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1bd2ef7d-fcb0-4c52-ae88-af81e70e06c3 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.044158649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bd2ef7d-fcb0-4c52-ae88-af81e70e06c3 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.046066471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85efee1b-6d74-4b16-9b91-145e4fe0d52e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.046524110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686677046498390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85efee1b-6d74-4b16-9b91-145e4fe0d52e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.047391349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f9d1638-3246-4893-9ede-26da9f7c578b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.047686289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f9d1638-3246-4893-9ede-26da9f7c578b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.048218909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f9d1638-3246-4893-9ede-26da9f7c578b name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.093733738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5231bcd6-b6c1-4914-babd-6231773dd1d4 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.093921732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5231bcd6-b6c1-4914-babd-6231773dd1d4 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.096027433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1c24cfb-a775-449a-a5a6-4cde5e0ad654 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.096508378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686677096476233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1c24cfb-a775-449a-a5a6-4cde5e0ad654 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.097175751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c517f8e-2ed4-4c48-944f-c461dabf83ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.097246417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c517f8e-2ed4-4c48-944f-c461dabf83ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:37 no-preload-390487 crio[713]: time="2024-10-11 22:44:37.097531863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5,PodSandboxId:b8e3a7b6dbfdca05c5d8e5dcb0b861e939559e4085ce417cb42f7e8dabc164d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728685823164535251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f955c1-7782-4612-92cd-483ddc048439,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4,PodSandboxId:90b87c4142d01805274ca948753c2dc402a75990fcb3f4bb302f35991728612a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822744002012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cpdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd94e043-da2c-49c5-84df-2ab683ebdc37,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30,PodSandboxId:c18a08a146672ccfbcf340915319793cb7fadf3bb16d056dc0c9802770054c4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728685822643644223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swwtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00
984077-22c9-4c6c-a0f0-84e3a460b2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37,PodSandboxId:52c2634071218319da631341e6078ce5d129453700a8d68c872d58c7820fec00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728685821921373552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g8nw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d50e6c35-accf-4fbd-9f76-d7621d382fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27,PodSandboxId:e89f75e2f30f84a2f2096e42111252be6117b99c19bde5e481f17047039cd314,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728685810846406728,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0b87e9c061cfedd82c3ac79f69a62d0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f,PodSandboxId:ec3f725c59f839dcc07a34d439d55653e5934d882bd07b99948f3196cc59da98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728685810837196551,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432986cadaeb04308a3d8728566735c2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209,PodSandboxId:b72873bae5d0fdf7b5c4c7167506c8bea0e46ad733ae2a7b41f46068e9196a03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728685810884353411,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5fa512683d583d3f8bf8b770b19c3f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc,PodSandboxId:c30579031e2f226147156b25a18f3581cb869a2083eeee3d30f953c394bac0bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728685810806857749,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591,PodSandboxId:2c36b8febf83ddd51de0c56ded58b1740dad89d18190b7dc3f64f9ee1cac39c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728685524961919340,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-390487,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f87acdf69d2827e9ada315c3e408325,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c517f8e-2ed4-4c48-944f-c461dabf83ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6096cd67128b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   b8e3a7b6dbfdc       storage-provisioner
	08818b22b7103       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   90b87c4142d01       coredns-7c65d6cfc9-cpdng
	32b14e6aa5326       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   c18a08a146672       coredns-7c65d6cfc9-swwtf
	ebad1fa4ce2cd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   52c2634071218       kube-proxy-4g8nw
	d68b4e0d1d7ae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   b72873bae5d0f       kube-controller-manager-no-preload-390487
	f8924a587a9eb       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   e89f75e2f30f8       kube-scheduler-no-preload-390487
	8293e0fb6f1b0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   ec3f725c59f83       etcd-no-preload-390487
	027364df9cdb4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            2                   c30579031e2f2       kube-apiserver-no-preload-390487
	358e33d06a269       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   2c36b8febf83d       kube-apiserver-no-preload-390487
	
	
	==> coredns [08818b22b7103835a29af444b58e013b9f89b0947b6634414e61d3c9da7494c4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [32b14e6aa5326e84fb994eee9dadddae81b6e5ba5097273a554009a5ab6fee30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-390487
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-390487
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=no-preload-390487
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:30:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-390487
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:40:37 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:40:37 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:40:37 +0000   Fri, 11 Oct 2024 22:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:40:37 +0000   Fri, 11 Oct 2024 22:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.55
	  Hostname:    no-preload-390487
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e2a509ed2ba444bb34648704e214638
	  System UUID:                9e2a509e-d2ba-444b-b346-48704e214638
	  Boot ID:                    14dc90eb-55c0-46fe-a428-0609dc730585
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-cpdng                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-swwtf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-390487                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-390487             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-390487    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4g8nw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-390487             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-26g42              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-390487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-390487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-390487 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-390487 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-390487 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-390487 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-390487 event: Registered Node no-preload-390487 in Controller
	
	
	==> dmesg <==
	[  +0.055612] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.231389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617200] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600324] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct11 22:25] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.056501] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063601] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.224336] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.136968] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.295648] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.392621] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.064861] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.990524] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +3.421036] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.239893] kauditd_printk_skb: 86 callbacks suppressed
	[Oct11 22:30] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.518058] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +4.425541] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.652465] systemd-fstab-generator[3409]: Ignoring "noauto" option for root device
	[  +5.393908] systemd-fstab-generator[3541]: Ignoring "noauto" option for root device
	[  +0.122289] kauditd_printk_skb: 14 callbacks suppressed
	[Oct11 22:31] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8293e0fb6f1b00980152f8f485e964a048b4ea58e044cfe32200fd2ec192836f] <==
	{"level":"info","ts":"2024-10-11T22:30:11.267510Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.55:2380"}
	{"level":"info","ts":"2024-10-11T22:30:11.267543Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.55:2380"}
	{"level":"info","ts":"2024-10-11T22:30:11.510898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f received MsgPreVoteResp from 34f12fd29de7a73f at term 1"}
	{"level":"info","ts":"2024-10-11T22:30:11.511290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f received MsgVoteResp from 34f12fd29de7a73f at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"34f12fd29de7a73f became leader at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.511506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 34f12fd29de7a73f elected leader 34f12fd29de7a73f at term 2"}
	{"level":"info","ts":"2024-10-11T22:30:11.515990Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"34f12fd29de7a73f","local-member-attributes":"{Name:no-preload-390487 ClientURLs:[https://192.168.61.55:2379]}","request-path":"/0/members/34f12fd29de7a73f/attributes","cluster-id":"d57be02f73e7047c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:30:11.516260Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.516786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:30:11.517155Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:30:11.525964Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:30:11.526711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:30:11.532327Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d57be02f73e7047c","local-member-id":"34f12fd29de7a73f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.536950Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.529416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T22:30:11.532803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:30:11.537864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:30:11.537928Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:30:11.541955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.55:2379"}
	{"level":"info","ts":"2024-10-11T22:40:11.727589Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-10-11T22:40:11.735979Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":718,"took":"7.953659ms","hash":4007623269,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-10-11T22:40:11.736056Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4007623269,"revision":718,"compact-revision":-1}
	
	
	==> kernel <==
	 22:44:37 up 19 min,  0 users,  load average: 0.12, 0.09, 0.09
	Linux no-preload-390487 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [027364df9cdb430a087f5321ecfb181262ad370599a86752308a3bacf328f8cc] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1011 22:40:14.614871       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:40:14.614956       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1011 22:40:14.616044       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:40:14.616090       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:41:14.616879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:41:14.617211       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1011 22:41:14.617104       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:41:14.617320       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1011 22:41:14.618403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:41:14.618551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1011 22:43:14.619410       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:43:14.619507       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1011 22:43:14.619612       1 handler_proxy.go:99] no RequestInfo found in the context
	E1011 22:43:14.619687       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1011 22:43:14.620653       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 22:43:14.620723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [358e33d06a269d62ef2c4cf3846da60e55469b9bdc1f57985ee0f2e49dc3b591] <==
	W1011 22:30:04.948547       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:04.960827       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.015560       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.050642       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.052119       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.057552       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.072173       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.072204       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.184930       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.186464       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.200223       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.227475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.280263       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.341372       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.351045       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.395901       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.410188       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.410410       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.441117       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.476677       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.481609       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.493282       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.589846       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.697489       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 22:30:05.742145       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d68b4e0d1d7ae8a1f1488f913b449086a771f259f904d7cef265103df1f02209] <==
	E1011 22:39:20.511493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:39:21.094821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:39:50.518593       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:39:51.102346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:40:20.525147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:40:21.112859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:40:37.882483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-390487"
	E1011 22:40:50.533418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:40:51.120930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:41:20.539658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:21.128580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1011 22:41:34.237169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="158.145µs"
	I1011 22:41:49.237033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="333.977µs"
	E1011 22:41:50.546056       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:41:51.138407       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:20.553586       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:21.151242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:42:50.560152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:42:51.158661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:20.566930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:21.166453       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:43:50.573476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:43:51.174140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1011 22:44:20.580356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1011 22:44:21.187422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ebad1fa4ce2cd8bbdae9db4ec5bbbc50396bd39a6304336cafc09d8bca386e37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 22:30:22.260844       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 22:30:22.275717       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.55"]
	E1011 22:30:22.275830       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 22:30:22.478822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 22:30:22.478871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 22:30:22.478916       1 server_linux.go:169] "Using iptables Proxier"
	I1011 22:30:22.481830       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 22:30:22.482053       1 server.go:483] "Version info" version="v1.31.1"
	I1011 22:30:22.482064       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:30:22.486716       1 config.go:199] "Starting service config controller"
	I1011 22:30:22.486798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 22:30:22.486824       1 config.go:105] "Starting endpoint slice config controller"
	I1011 22:30:22.486828       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 22:30:22.487176       1 config.go:328] "Starting node config controller"
	I1011 22:30:22.487283       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 22:30:22.587474       1 shared_informer.go:320] Caches are synced for node config
	I1011 22:30:22.587501       1 shared_informer.go:320] Caches are synced for service config
	I1011 22:30:22.587520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f8924a587a9ebc2651a26490bffa92844b5fba866d8b5c6b7170bd5bffb05b27] <==
	W1011 22:30:13.622025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:30:13.622062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.504414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 22:30:14.504448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.507036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.507099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.601046       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:30:14.601237       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1011 22:30:14.701021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.701076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.716948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 22:30:14.717003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.805940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:30:14.806128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.810588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 22:30:14.810703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.823575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 22:30:14.823838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.825645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 22:30:14.825726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.842318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:30:14.842548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 22:30:14.897665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:30:14.897870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1011 22:30:16.501367       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 22:43:26 no-preload-390487 kubelet[3415]: E1011 22:43:26.444190    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686606443172574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:33 no-preload-390487 kubelet[3415]: E1011 22:43:33.219391    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:43:36 no-preload-390487 kubelet[3415]: E1011 22:43:36.445854    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686616445370710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:36 no-preload-390487 kubelet[3415]: E1011 22:43:36.446218    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686616445370710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:46 no-preload-390487 kubelet[3415]: E1011 22:43:46.220635    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:43:46 no-preload-390487 kubelet[3415]: E1011 22:43:46.447495    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686626447301284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:46 no-preload-390487 kubelet[3415]: E1011 22:43:46.447544    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686626447301284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:56 no-preload-390487 kubelet[3415]: E1011 22:43:56.449301    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686636448932996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:56 no-preload-390487 kubelet[3415]: E1011 22:43:56.449597    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686636448932996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:43:58 no-preload-390487 kubelet[3415]: E1011 22:43:58.218728    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:44:06 no-preload-390487 kubelet[3415]: E1011 22:44:06.455870    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686646454516313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:06 no-preload-390487 kubelet[3415]: E1011 22:44:06.455963    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686646454516313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:11 no-preload-390487 kubelet[3415]: E1011 22:44:11.220111    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]: E1011 22:44:16.285551    3415 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]: E1011 22:44:16.456807    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686656456537397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:16 no-preload-390487 kubelet[3415]: E1011 22:44:16.456851    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686656456537397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:26 no-preload-390487 kubelet[3415]: E1011 22:44:26.221048    3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26g42" podUID="faa0e007-ef61-4c3a-813e-4cea5052c564"
	Oct 11 22:44:26 no-preload-390487 kubelet[3415]: E1011 22:44:26.457963    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686666457665635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:26 no-preload-390487 kubelet[3415]: E1011 22:44:26.458011    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686666457665635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:36 no-preload-390487 kubelet[3415]: E1011 22:44:36.459557    3415 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676459236085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 11 22:44:36 no-preload-390487 kubelet[3415]: E1011 22:44:36.459600    3415 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686676459236085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b6096cd67128bdb843c24cd452596adfdc093c165b9d6b76efe66004fae665c5] <==
	I1011 22:30:23.268995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:30:23.296576       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:30:23.296635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:30:23.305119       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:30:23.306946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5!
	I1011 22:30:23.307921       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0713c922-9daa-49bc-83dd-f068c0a969c9", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5 became leader
	I1011 22:30:23.408096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-390487_e147461c-90ee-47d9-a237-d5e1a6e23ff5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-390487 -n no-preload-390487
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-390487 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-26g42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42: exit status 1 (67.254141ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-26g42" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-390487 describe pod metrics-server-6867b74b74-26g42: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (304.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:42:06.382854   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:43:10.343518   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/calico-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:43:27.562669   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
E1011 22:43:33.029233   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.223:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.223:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (239.010711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-323416" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-323416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-323416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.951µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-323416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
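To reproduce these checks by hand once the apiserver is reachable again, roughly the same commands the test runs can be issued directly (profile name, namespace, and label selector are taken from the log above; the final jsonpath query is only an illustrative way to verify the expected registry.k8s.io/echoserver:1.4 image, not something the test itself executes):

  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-323416 -n old-k8s-version-323416
  kubectl --context old-k8s-version-323416 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-323416 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
  kubectl --context old-k8s-version-323416 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'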
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (223.166605ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-323416 logs -n 25: (1.451477157s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-579309 sudo cat                              | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo                                  | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo find                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-579309 sudo crio                             | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-579309                                       | bridge-579309                | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-590493 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:15 UTC |
	|         | disable-driver-mounts-590493                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:15 UTC | 11 Oct 24 22:17 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-390487             | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223942            | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC | 11 Oct 24 22:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-070708  | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC | 11 Oct 24 22:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:17 UTC |                     |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-323416        | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-390487                  | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-390487                                   | no-preload-390487            | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223942                 | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223942                                  | embed-certs-223942           | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-070708       | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-070708 | jenkins | v1.34.0 | 11 Oct 24 22:19 UTC | 11 Oct 24 22:29 UTC |
	|         | default-k8s-diff-port-070708                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-323416             | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC | 11 Oct 24 22:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-323416                              | old-k8s-version-323416       | jenkins | v1.34.0 | 11 Oct 24 22:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
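# For reference, the last Audit row above (the old-k8s-version-323416 restart that never
# reached an End Time) reassembles into a single command line; the flags are copied from
# the wrapped Args column, and the binary path is assumed to be the same
# out/minikube-linux-amd64 used elsewhere in this run:
out/minikube-linux-amd64 start -p old-k8s-version-323416 --memory=2200 \
  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
  --disable-driver-mounts --keep-context=false --driver=kvm2 \
  --container-runtime=crio --kubernetes-version=v1.20.0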
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 22:20:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 22:20:37.931908   78126 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:20:37.932013   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932021   78126 out.go:358] Setting ErrFile to fd 2...
	I1011 22:20:37.932026   78126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:20:37.932189   78126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:20:37.932671   78126 out.go:352] Setting JSON to false
	I1011 22:20:37.933524   78126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7383,"bootTime":1728677855,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:20:37.933612   78126 start.go:139] virtualization: kvm guest
	I1011 22:20:37.935895   78126 out.go:177] * [old-k8s-version-323416] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:20:37.937240   78126 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:20:37.937264   78126 notify.go:220] Checking for updates...
	I1011 22:20:37.939707   78126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:20:37.940957   78126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:20:37.942168   78126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:20:37.943261   78126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:20:37.944499   78126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:20:37.946000   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:20:37.946358   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.946394   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.960896   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1011 22:20:37.961275   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.961828   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.961856   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.962156   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.962317   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:37.964012   78126 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 22:20:37.965157   78126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:20:37.965486   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:20:37.965521   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:20:37.979745   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1011 22:20:37.980212   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:20:37.980638   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:20:37.980660   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:20:37.980987   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:20:37.981195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:20:38.014271   78126 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 22:20:38.015429   78126 start.go:297] selected driver: kvm2
	I1011 22:20:38.015442   78126 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.015581   78126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:20:38.016247   78126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.016336   78126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 22:20:38.030559   78126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 22:20:38.030943   78126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:20:38.030973   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:20:38.031037   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:20:38.031074   78126 start.go:340] cluster config:
	{Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:20:38.031174   78126 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 22:20:38.033049   78126 out.go:177] * Starting "old-k8s-version-323416" primary control-plane node in "old-k8s-version-323416" cluster
	I1011 22:20:39.118864   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:38.034171   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:20:38.034204   78126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 22:20:38.034212   78126 cache.go:56] Caching tarball of preloaded images
	I1011 22:20:38.034266   78126 preload.go:172] Found /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1011 22:20:38.034276   78126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1011 22:20:38.034361   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:20:38.034531   78126 start.go:360] acquireMachinesLock for old-k8s-version-323416: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:20:45.198865   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:48.270849   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:54.350871   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:20:57.422868   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:03.502801   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:06.574950   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:12.654900   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:15.726940   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:21.806892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:24.878947   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:30.958903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:34.030961   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:40.110909   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:43.182869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:49.262857   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:52.334903   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:21:58.414892   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:01.486914   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:07.566885   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:10.638888   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:16.718908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:19.790874   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:25.870893   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:28.942886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:35.022875   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:38.094889   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:44.174898   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:47.246907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:53.326869   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:22:56.398883   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:02.482839   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:05.550858   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:11.630908   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:14.702895   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:20.782925   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:23.854907   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:29.934886   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:33.006820   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:39.086906   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
	I1011 22:23:42.158938   77373 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.55:22: connect: no route to host
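# Hedged sketch: the repeated "Error dialing TCP ... connect: no route to host" lines above
# come from libmachine polling the guest's SSH port while the no-preload VM has no usable
# network route. A rough shell equivalent of that wait loop (IP and port taken from the
# log; minikube does this internally in Go, not with a script like this):
until nc -z -w 3 192.168.61.55 22; do
  echo "192.168.61.55:22 still unreachable, retrying..."
  sleep 3
done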
	I1011 22:23:45.162974   77526 start.go:364] duration metric: took 4m27.722613931s to acquireMachinesLock for "embed-certs-223942"
	I1011 22:23:45.163058   77526 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:23:45.163081   77526 fix.go:54] fixHost starting: 
	I1011 22:23:45.163410   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:23:45.163459   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:23:45.178675   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1011 22:23:45.179157   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:23:45.179600   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:23:45.179620   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:23:45.179959   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:23:45.180200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:23:45.180348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:23:45.182134   77526 fix.go:112] recreateIfNeeded on embed-certs-223942: state=Stopped err=<nil>
	I1011 22:23:45.182159   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	W1011 22:23:45.182305   77526 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:23:45.184160   77526 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223942" ...
	I1011 22:23:45.185640   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Start
	I1011 22:23:45.185844   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring networks are active...
	I1011 22:23:45.186700   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network default is active
	I1011 22:23:45.187125   77526 main.go:141] libmachine: (embed-certs-223942) Ensuring network mk-embed-certs-223942 is active
	I1011 22:23:45.187499   77526 main.go:141] libmachine: (embed-certs-223942) Getting domain xml...
	I1011 22:23:45.188220   77526 main.go:141] libmachine: (embed-certs-223942) Creating domain...
	I1011 22:23:46.400681   77526 main.go:141] libmachine: (embed-certs-223942) Waiting to get IP...
	I1011 22:23:46.401694   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.402146   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.402226   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.402142   78768 retry.go:31] will retry after 262.164449ms: waiting for machine to come up
	I1011 22:23:46.665716   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.666177   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.666204   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.666139   78768 retry.go:31] will retry after 264.99316ms: waiting for machine to come up
	I1011 22:23:46.932771   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:46.933128   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:46.933167   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:46.933084   78768 retry.go:31] will retry after 388.243159ms: waiting for machine to come up
	I1011 22:23:47.322648   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.323103   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.323165   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.323047   78768 retry.go:31] will retry after 374.999199ms: waiting for machine to come up
	I1011 22:23:45.160618   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:23:45.160654   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.160935   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:23:45.160960   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:23:45.161145   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:23:45.162838   77373 machine.go:96] duration metric: took 4m37.426000052s to provisionDockerMachine
	I1011 22:23:45.162876   77373 fix.go:56] duration metric: took 4m37.446804874s for fixHost
	I1011 22:23:45.162886   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 4m37.446840276s
	W1011 22:23:45.162906   77373 start.go:714] error starting host: provision: host is not running
	W1011 22:23:45.163008   77373 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1011 22:23:45.163018   77373 start.go:729] Will try again in 5 seconds ...
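The two warning lines above show the outer retry around host start: when fixHost ends with "provision: host is not running", the start is not aborted but simply re-queued after a short pause. A minimal Go sketch of that pattern follows; the attempt count, the delay, and the start callback are illustrative assumptions, not minikube's actual code.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startWithRetry calls start and, on failure, waits and tries again,
    // echoing the "StartHost failed, but will try again" / "Will try again
    // in 5 seconds" pattern in the log. Attempts and wait are assumptions.
    func startWithRetry(start func() error, attempts int, wait time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = start(); err == nil {
    			return nil
    		}
    		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    		time.Sleep(wait)
    	}
    	return fmt.Errorf("start host failed after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := startWithRetry(func() error {
    		calls++
    		if calls < 2 {
    			return errors.New("provision: host is not running")
    		}
    		return nil
    	}, 2, 100*time.Millisecond)
    	fmt.Println("result:", err)
    }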
	I1011 22:23:47.699684   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:47.700088   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:47.700117   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:47.700031   78768 retry.go:31] will retry after 589.703952ms: waiting for machine to come up
	I1011 22:23:48.291928   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.292398   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.292422   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.292351   78768 retry.go:31] will retry after 671.971303ms: waiting for machine to come up
	I1011 22:23:48.966357   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:48.966772   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:48.966797   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:48.966738   78768 retry.go:31] will retry after 848.2726ms: waiting for machine to come up
	I1011 22:23:49.816735   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:49.817155   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:49.817181   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:49.817116   78768 retry.go:31] will retry after 941.163438ms: waiting for machine to come up
	I1011 22:23:50.759625   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:50.760052   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:50.760095   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:50.759996   78768 retry.go:31] will retry after 1.225047114s: waiting for machine to come up
	I1011 22:23:51.987349   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:51.987788   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:51.987817   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:51.987737   78768 retry.go:31] will retry after 2.184212352s: waiting for machine to come up
	I1011 22:23:50.165493   77373 start.go:360] acquireMachinesLock for no-preload-390487: {Name:mkb431367b7f497f2f0dc5fb797bc835aa38c7d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 22:23:54.173125   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:54.173564   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:54.173595   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:54.173503   78768 retry.go:31] will retry after 2.000096312s: waiting for machine to come up
	I1011 22:23:56.176004   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:56.176458   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:56.176488   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:56.176403   78768 retry.go:31] will retry after 3.062345768s: waiting for machine to come up
	I1011 22:23:59.239982   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:23:59.240426   77526 main.go:141] libmachine: (embed-certs-223942) DBG | unable to find current IP address of domain embed-certs-223942 in network mk-embed-certs-223942
	I1011 22:23:59.240452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | I1011 22:23:59.240386   78768 retry.go:31] will retry after 4.019746049s: waiting for machine to come up
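The retry.go lines above poll libvirt for the domain's DHCP lease, backing off a little more on each miss until the machine reports an address. A minimal Go sketch of such a wait loop is below, under the assumption of a hypothetical lookup callback and a roughly doubling delay schedule (the real intervals in the log are jittered).

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes.
    // Each failed attempt grows the delay, mirroring the increasing
    // "will retry after ..." intervals in the log above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ip, err := lookup()
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		fmt.Printf("no IP yet, will retry after %v\n", delay)
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	start := time.Now()
    	// Hypothetical lookup: pretend the lease shows up after ~3 seconds.
    	ip, err := waitForIP(func() (string, error) {
    		if time.Since(start) > 3*time.Second {
    			return "192.168.72.238", nil
    		}
    		return "", errors.New("unable to find current IP address")
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }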
	I1011 22:24:04.643399   77741 start.go:364] duration metric: took 4m21.087318573s to acquireMachinesLock for "default-k8s-diff-port-070708"
	I1011 22:24:04.643463   77741 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:04.643471   77741 fix.go:54] fixHost starting: 
	I1011 22:24:04.643903   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:04.643950   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:04.660647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1011 22:24:04.661106   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:04.661603   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:24:04.661627   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:04.661966   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:04.662148   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:04.662392   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:24:04.664004   77741 fix.go:112] recreateIfNeeded on default-k8s-diff-port-070708: state=Stopped err=<nil>
	I1011 22:24:04.664048   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	W1011 22:24:04.664205   77741 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:04.666462   77741 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-070708" ...
	I1011 22:24:03.263908   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264434   77526 main.go:141] libmachine: (embed-certs-223942) Found IP for machine: 192.168.72.238
	I1011 22:24:03.264467   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has current primary IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.264476   77526 main.go:141] libmachine: (embed-certs-223942) Reserving static IP address...
	I1011 22:24:03.264932   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.264964   77526 main.go:141] libmachine: (embed-certs-223942) Reserved static IP address: 192.168.72.238
	I1011 22:24:03.264984   77526 main.go:141] libmachine: (embed-certs-223942) DBG | skip adding static IP to network mk-embed-certs-223942 - found existing host DHCP lease matching {name: "embed-certs-223942", mac: "52:54:00:06:2c:1c", ip: "192.168.72.238"}
	I1011 22:24:03.264995   77526 main.go:141] libmachine: (embed-certs-223942) Waiting for SSH to be available...
	I1011 22:24:03.265018   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Getting to WaitForSSH function...
	I1011 22:24:03.267171   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267556   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.267594   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.267682   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH client type: external
	I1011 22:24:03.267720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa (-rw-------)
	I1011 22:24:03.267747   77526 main.go:141] libmachine: (embed-certs-223942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:03.267760   77526 main.go:141] libmachine: (embed-certs-223942) DBG | About to run SSH command:
	I1011 22:24:03.267767   77526 main.go:141] libmachine: (embed-certs-223942) DBG | exit 0
	I1011 22:24:03.390641   77526 main.go:141] libmachine: (embed-certs-223942) DBG | SSH cmd err, output: <nil>: 
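The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` purely as a reachability probe; the `<nil>` result here means the guest answered. A minimal Go sketch of that probe, assuming a placeholder user, host, and key path rather than the values from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probeSSH runs "exit 0" through the system ssh binary, the same way the
    // external SSH client in the log does, and reports whether the host answered.
    func probeSSH(user, host, keyPath string) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run()
    }

    func main() {
    	for i := 0; i < 10; i++ {
    		if err := probeSSH("docker", "192.0.2.10", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }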
	I1011 22:24:03.390996   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetConfigRaw
	I1011 22:24:03.391600   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.393909   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394224   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.394267   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.394510   77526 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/config.json ...
	I1011 22:24:03.394735   77526 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:03.394754   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:03.394941   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.396974   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397280   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.397298   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.397414   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.397577   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397724   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.397856   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.398095   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.398276   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.398285   77526 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:03.503029   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:03.503063   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503282   77526 buildroot.go:166] provisioning hostname "embed-certs-223942"
	I1011 22:24:03.503301   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.503503   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.505943   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506300   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.506325   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.506444   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.506595   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506769   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.506899   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.507087   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.507247   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.507261   77526 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223942 && echo "embed-certs-223942" | sudo tee /etc/hostname
	I1011 22:24:03.626937   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223942
	
	I1011 22:24:03.626970   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.629752   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630038   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.630067   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.630194   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:03.630370   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:03.630665   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:03.630805   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:03.630988   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:03.631011   77526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223942/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:03.744196   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:03.744224   77526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:03.744247   77526 buildroot.go:174] setting up certificates
	I1011 22:24:03.744258   77526 provision.go:84] configureAuth start
	I1011 22:24:03.744270   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetMachineName
	I1011 22:24:03.744535   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:03.747114   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747452   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.747479   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.747619   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:03.750238   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750626   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:03.750662   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:03.750801   77526 provision.go:143] copyHostCerts
	I1011 22:24:03.750867   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:03.750890   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:03.750970   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:03.751094   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:03.751108   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:03.751146   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:03.751246   77526 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:03.751257   77526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:03.751288   77526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:03.751360   77526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223942 san=[127.0.0.1 192.168.72.238 embed-certs-223942 localhost minikube]
	I1011 22:24:04.039983   77526 provision.go:177] copyRemoteCerts
	I1011 22:24:04.040046   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:04.040072   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.042846   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043130   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.043151   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.043339   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.043530   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.043689   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.043836   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.124533   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:04.148503   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 22:24:04.172199   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:04.195175   77526 provision.go:87] duration metric: took 450.888581ms to configureAuth
	I1011 22:24:04.195203   77526 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:04.195381   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:04.195446   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.197839   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198189   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.198269   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.198348   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.198561   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198730   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.198875   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.199041   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.199217   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.199237   77526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:04.411621   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:04.411653   77526 machine.go:96] duration metric: took 1.016905055s to provisionDockerMachine
	I1011 22:24:04.411667   77526 start.go:293] postStartSetup for "embed-certs-223942" (driver="kvm2")
	I1011 22:24:04.411680   77526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:04.411699   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.411977   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:04.412003   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.414381   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414679   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.414722   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.414835   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.415010   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.415144   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.415266   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.496916   77526 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:04.500935   77526 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:04.500956   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:04.501023   77526 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:04.501115   77526 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:04.501222   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:04.510266   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:04.537636   77526 start.go:296] duration metric: took 125.956397ms for postStartSetup
	I1011 22:24:04.537678   77526 fix.go:56] duration metric: took 19.374596283s for fixHost
	I1011 22:24:04.537698   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.540344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540719   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.540742   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.540838   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.541012   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541160   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.541316   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.541474   77526 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:04.541648   77526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I1011 22:24:04.541659   77526 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:04.643243   77526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685444.617606783
	
	I1011 22:24:04.643266   77526 fix.go:216] guest clock: 1728685444.617606783
	I1011 22:24:04.643273   77526 fix.go:229] Guest: 2024-10-11 22:24:04.617606783 +0000 UTC Remote: 2024-10-11 22:24:04.537682618 +0000 UTC m=+287.234553168 (delta=79.924165ms)
	I1011 22:24:04.643312   77526 fix.go:200] guest clock delta is within tolerance: 79.924165ms
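The guest clock check above runs `date +%s.%N` on the VM and compares the parsed result against the host's reference time for the same moment, accepting the drift when it stays inside a tolerance. A small Go sketch of that comparison; the one-second tolerance and the simulated guest output are assumptions, and minikube's own tolerance may differ.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns how far
    // the guest clock sits from the host reference time.
    func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(hostRef), nil
    }

    func main() {
    	// Hypothetical guest output captured at hostRef, ~80ms ahead.
    	hostRef := time.Now()
    	out := fmt.Sprintf("%.9f", float64(hostRef.UnixNano())/1e9+0.08)
    	d, _ := clockDelta(out, hostRef)
    	const tolerance = time.Second
    	if math.Abs(float64(d)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", d)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
    	}
    }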
	I1011 22:24:04.643320   77526 start.go:83] releasing machines lock for "embed-certs-223942", held for 19.480305529s
	I1011 22:24:04.643344   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.643569   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:04.646344   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646733   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.646766   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.646918   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647366   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647519   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:24:04.647644   77526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:04.647693   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.647723   77526 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:04.647748   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:24:04.649992   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650329   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650354   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650378   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650509   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.650676   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.650750   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:04.650773   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:04.650857   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.650959   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:24:04.651027   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.651081   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:24:04.651200   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:24:04.651313   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:24:04.756500   77526 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:04.762420   77526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:04.901155   77526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:04.908234   77526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:04.908304   77526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:04.929972   77526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:04.929999   77526 start.go:495] detecting cgroup driver to use...
	I1011 22:24:04.930069   77526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:04.946899   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:04.960670   77526 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:04.960739   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:04.973981   77526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:04.987444   77526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:05.103114   77526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:05.251587   77526 docker.go:233] disabling docker service ...
	I1011 22:24:05.251662   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:05.266087   77526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:05.279209   77526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:05.431467   77526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:05.571151   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:05.584813   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:05.603563   77526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:05.603632   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.614924   77526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:05.614979   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.627625   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.638259   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.651521   77526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:05.663937   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.674307   77526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.696935   77526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:05.707464   77526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:05.717338   77526 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:05.717416   77526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:05.737811   77526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
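The three commands above form a probe-then-fallback sequence: read the bridge netfilter sysctl, load br_netfilter when the key is missing (the status 255 above), then turn on IPv4 forwarding. A minimal Go sketch of the same ordering, shelling out to the same commands; it needs root and is only illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the probe-then-fallback order in the log:
    // check the sysctl, load br_netfilter if the key is missing, then make
    // sure IPv4 forwarding is enabled.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("bridge netfilter key missing, loading br_netfilter")
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }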
	I1011 22:24:05.749453   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:05.888144   77526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:05.984321   77526 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:05.984382   77526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:05.989389   77526 start.go:563] Will wait 60s for crictl version
	I1011 22:24:05.989447   77526 ssh_runner.go:195] Run: which crictl
	I1011 22:24:05.993333   77526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:06.033281   77526 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
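After restarting CRI-O, the log waits up to 60s for the crio.sock path and then for a working crictl before moving on. A minimal Go sketch of waiting for a unix socket to appear; the poll interval is an assumption, the path is the one shown above.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the path exists and is a socket, or the
    // timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("crio socket is ready")
    }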
	I1011 22:24:06.033366   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.062164   77526 ssh_runner.go:195] Run: crio --version
	I1011 22:24:06.092927   77526 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:06.094094   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetIP
	I1011 22:24:06.097442   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.097893   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:24:06.097941   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:24:06.098179   77526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:06.102566   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:06.116183   77526 kubeadm.go:883] updating cluster {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:06.116297   77526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:06.116347   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:06.164193   77526 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:06.164272   77526 ssh_runner.go:195] Run: which lz4
	I1011 22:24:06.168557   77526 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:06.173131   77526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:06.173165   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
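The stat/scp pair above is a cache check: if /preloaded.tar.lz4 is already on the guest it is reused, otherwise the host's cached preload tarball is copied over. A minimal Go sketch of that check using the ssh and scp binaries; host, key, and paths are placeholders, and minikube itself uses its own ssh_runner rather than these commands.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensurePreload checks whether the guest already has the preload tarball
    // and, if not, copies the local cached one over.
    func ensurePreload(user, host, key, localTarball string) error {
    	target := fmt.Sprintf("%s@%s", user, host)
    	check := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
    		target, "stat -c '%s %y' /preloaded.tar.lz4")
    	if check.Run() == nil {
    		fmt.Println("preload already present on guest, skipping transfer")
    		return nil
    	}
    	fmt.Println("preload missing on guest, copying", localTarball)
    	return exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no",
    		localTarball, target+":/preloaded.tar.lz4").Run()
    }

    func main() {
    	err := ensurePreload("docker", "192.0.2.10", "/path/to/id_rsa",
    		"/path/to/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")
    	fmt.Println(err)
    }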
	I1011 22:24:04.667909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Start
	I1011 22:24:04.668056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring networks are active...
	I1011 22:24:04.668688   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network default is active
	I1011 22:24:04.668985   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Ensuring network mk-default-k8s-diff-port-070708 is active
	I1011 22:24:04.669312   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Getting domain xml...
	I1011 22:24:04.669964   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Creating domain...
	I1011 22:24:05.931094   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting to get IP...
	I1011 22:24:05.932142   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932635   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:05.932711   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:05.932622   78901 retry.go:31] will retry after 199.659438ms: waiting for machine to come up
	I1011 22:24:06.134036   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134479   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.134504   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.134439   78901 retry.go:31] will retry after 379.083732ms: waiting for machine to come up
	I1011 22:24:06.515118   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515656   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.515686   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.515599   78901 retry.go:31] will retry after 302.733712ms: waiting for machine to come up
	I1011 22:24:06.820188   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820629   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:06.820657   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:06.820579   78901 retry.go:31] will retry after 466.059846ms: waiting for machine to come up
	I1011 22:24:07.288837   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.289371   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.289302   78901 retry.go:31] will retry after 551.760501ms: waiting for machine to come up
	I1011 22:24:07.843026   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843561   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:07.843590   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:07.843517   78901 retry.go:31] will retry after 626.896356ms: waiting for machine to come up
	I1011 22:24:07.621882   77526 crio.go:462] duration metric: took 1.453355137s to copy over tarball
	I1011 22:24:07.621973   77526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:09.732789   77526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110786947s)
	I1011 22:24:09.732823   77526 crio.go:469] duration metric: took 2.110914695s to extract the tarball
	I1011 22:24:09.732831   77526 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:09.768649   77526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:09.811856   77526 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:09.811881   77526 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:09.811890   77526 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I1011 22:24:09.811991   77526 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:09.812087   77526 ssh_runner.go:195] Run: crio config
	I1011 22:24:09.857847   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:09.857869   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:09.857877   77526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:09.857896   77526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223942 NodeName:embed-certs-223942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:09.858025   77526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223942"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:09.858082   77526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:09.868276   77526 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:09.868346   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:09.877682   77526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1011 22:24:09.894551   77526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:09.911181   77526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
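
The kubeadm.yaml rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough, illustrative sketch (not minikube's own code; the local file name is hypothetical, the log copies the rendered file to /var/tmp/minikube/kubeadm.yaml.new on the node), each document can be sanity-checked for an apiVersion and kind with gopkg.in/yaml.v3:

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the rendered multi-document kubeadm config.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 0; ; i++ {
		var doc map[string]any
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the stream; a real check would distinguish other errors
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: missing apiVersion or kind\n", i)
			continue
		}
		fmt.Printf("document %d: %v %v\n", i, doc["apiVersion"], doc["kind"])
	}
}
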
	I1011 22:24:09.927972   77526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:09.931799   77526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:09.943650   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:10.071890   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:10.089627   77526 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942 for IP: 192.168.72.238
	I1011 22:24:10.089658   77526 certs.go:194] generating shared ca certs ...
	I1011 22:24:10.089680   77526 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:10.089851   77526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:10.089905   77526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:10.089916   77526 certs.go:256] generating profile certs ...
	I1011 22:24:10.090038   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/client.key
	I1011 22:24:10.090121   77526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key.0dabc30d
	I1011 22:24:10.090163   77526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key
	I1011 22:24:10.090323   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:10.090354   77526 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:10.090364   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:10.090392   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:10.090415   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:10.090438   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:10.090476   77526 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:10.091225   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:10.117879   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:10.169586   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:10.210385   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:10.245240   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 22:24:10.274354   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:10.299943   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:10.324265   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/embed-certs-223942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:10.347352   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:10.370252   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:10.393715   77526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:10.420103   77526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:10.436668   77526 ssh_runner.go:195] Run: openssl version
	I1011 22:24:10.442525   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:10.453055   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457461   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.457520   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:10.463121   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:10.473623   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:10.483653   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488022   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.488075   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:10.493553   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:10.503833   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:10.514171   77526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518935   77526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.518983   77526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:10.524479   77526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:10.534942   77526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:10.539385   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:10.545178   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:10.550886   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:10.556533   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:10.562024   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:10.567514   77526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
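
The `openssl x509 -checkend 86400` runs above ask whether each certificate will still be valid 24 hours from now. A minimal Go equivalent using crypto/x509 (illustrative only; the path is one of the certs named in the log and must be readable by the caller):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
// PEM-encoded certificate at path expires within duration d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
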
	I1011 22:24:10.573018   77526 kubeadm.go:392] StartCluster: {Name:embed-certs-223942 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-223942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:10.573136   77526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:10.573206   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.609216   77526 cri.go:89] found id: ""
	I1011 22:24:10.609291   77526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:10.619945   77526 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:10.619976   77526 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:10.620024   77526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:10.629748   77526 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:10.631292   77526 kubeconfig.go:125] found "embed-certs-223942" server: "https://192.168.72.238:8443"
	I1011 22:24:10.634516   77526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:10.644773   77526 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.238
	I1011 22:24:10.644805   77526 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:10.644821   77526 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:10.644874   77526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:10.680074   77526 cri.go:89] found id: ""
	I1011 22:24:10.680146   77526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:10.696118   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:10.705765   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:10.705789   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:10.705845   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:10.714771   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:10.714837   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:10.724255   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:10.733433   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:10.733490   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:10.742649   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.751287   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:10.751350   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:10.760572   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:10.769447   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:10.769517   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:10.778829   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:10.788208   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:10.900288   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.733461   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:11.929225   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.001383   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:12.093971   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:12.094053   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:08.471765   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472154   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:08.472178   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:08.472099   78901 retry.go:31] will retry after 1.132732814s: waiting for machine to come up
	I1011 22:24:09.606499   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607030   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:09.607056   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:09.606975   78901 retry.go:31] will retry after 1.289031778s: waiting for machine to come up
	I1011 22:24:10.897474   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.897980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:10.898005   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:10.897925   78901 retry.go:31] will retry after 1.601197893s: waiting for machine to come up
	I1011 22:24:12.500563   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501072   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:12.501100   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:12.501018   78901 retry.go:31] will retry after 1.772496409s: waiting for machine to come up
	I1011 22:24:12.594492   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.094823   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:13.594502   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.095004   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:14.109230   77526 api_server.go:72] duration metric: took 2.015258789s to wait for apiserver process to appear ...
	I1011 22:24:14.109265   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:14.109291   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.439696   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.439731   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.439747   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.515797   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:16.515834   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:16.610033   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:16.620048   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:16.620093   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.109593   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.116698   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.116729   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:17.609486   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:17.628000   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:17.628031   77526 api_server.go:103] status: https://192.168.72.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:18.109663   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:24:18.115996   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:24:18.121780   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:18.121806   77526 api_server.go:131] duration metric: took 4.012533784s to wait for apiserver health ...
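
The retry loop above polls the apiserver's /healthz endpoint until it stops answering 403/500 and returns 200. A bare-bones sketch of that polling pattern (not minikube's api_server.go; the URL and rough cadence come from the log, and TLS verification is skipped purely for illustration, whereas minikube verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry interval visible in the log
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.238:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
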
	I1011 22:24:18.121816   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:24:18.121823   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:18.123838   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:24:14.275892   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276364   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:14.276391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:14.276305   78901 retry.go:31] will retry after 2.71082021s: waiting for machine to come up
	I1011 22:24:16.989033   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989560   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:16.989591   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:16.989521   78901 retry.go:31] will retry after 2.502509628s: waiting for machine to come up
	I1011 22:24:18.125325   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:18.137257   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
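
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a typical bridge+portmap conflist for the 10.244.0.0/16 pod CIDR configured earlier looks roughly like the one embedded in this sketch (illustrative; not necessarily the exact file minikube writes):

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist; field values are assumptions apart from
// the pod subnet, which matches the kubeadm config above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root; a local path is used for a dry run.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote", len(conflist), "bytes")
}
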
	I1011 22:24:18.154806   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:18.164291   77526 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:18.164318   77526 system_pods.go:61] "coredns-7c65d6cfc9-w8zgx" [4a8fab25-6b1a-424f-982c-2def533eb1ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:18.164325   77526 system_pods.go:61] "etcd-embed-certs-223942" [95c77be2-4ed2-45b5-b1ad-abbd3bc6de78] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:18.164332   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [51fd81a8-25e1-4d2f-b6dc-42e1b277de54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:18.164338   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [17eda746-891b-44aa-800c-fabd818db753] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:18.164357   77526 system_pods.go:61] "kube-proxy-xz284" [a24b20d5-45dd-476c-8c91-07fd5cea511b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:18.164368   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [91bf2256-7d6e-4831-aab5-d59c4f801fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:18.164382   77526 system_pods.go:61] "metrics-server-6867b74b74-9xr4k" [fc1a267e-3cb7-40f6-8908-5b304f8f5b92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:18.164398   77526 system_pods.go:61] "storage-provisioner" [77ed79d9-66ba-4262-a972-e23ce8d1878c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:18.164412   77526 system_pods.go:74] duration metric: took 9.584328ms to wait for pod list to return data ...
	I1011 22:24:18.164421   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:18.167630   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:18.167650   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:18.167660   77526 node_conditions.go:105] duration metric: took 3.235822ms to run NodePressure ...
	I1011 22:24:18.167675   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:18.453597   77526 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457919   77526 kubeadm.go:739] kubelet initialised
	I1011 22:24:18.457937   77526 kubeadm.go:740] duration metric: took 4.320725ms waiting for restarted kubelet to initialise ...
	I1011 22:24:18.457944   77526 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:18.462432   77526 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.468402   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468426   77526 pod_ready.go:82] duration metric: took 5.974992ms for pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.468435   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "coredns-7c65d6cfc9-w8zgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.468441   77526 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.475031   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475048   77526 pod_ready.go:82] duration metric: took 6.600211ms for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.475056   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "etcd-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.475061   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:18.479729   77526 pod_ready.go:98] node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479748   77526 pod_ready.go:82] duration metric: took 4.679509ms for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	E1011 22:24:18.479756   77526 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-223942" hosting pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223942" has status "Ready":"False"
	I1011 22:24:18.479762   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:20.487624   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
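
The pod_ready waits above poll each system-critical pod until its PodReady condition reports True (and skip while the node itself is not Ready). A rough client-go sketch of the same wait (not minikube's pod_ready.go; the pod name and the 4m timeout are taken from the log, and the kubeconfig path is the default one):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-embed-certs-223942", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
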
	I1011 22:24:19.494990   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495353   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | unable to find current IP address of domain default-k8s-diff-port-070708 in network mk-default-k8s-diff-port-070708
	I1011 22:24:19.495384   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | I1011 22:24:19.495311   78901 retry.go:31] will retry after 2.761894966s: waiting for machine to come up
	I1011 22:24:22.260471   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has current primary IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.260931   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Found IP for machine: 192.168.39.162
	I1011 22:24:22.260960   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserving static IP address...
	I1011 22:24:22.261363   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Reserved static IP address: 192.168.39.162
	I1011 22:24:22.261401   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.261416   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Waiting for SSH to be available...
	I1011 22:24:22.261457   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | skip adding static IP to network mk-default-k8s-diff-port-070708 - found existing host DHCP lease matching {name: "default-k8s-diff-port-070708", mac: "52:54:00:9d:e0:21", ip: "192.168.39.162"}
	I1011 22:24:22.261493   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Getting to WaitForSSH function...
	I1011 22:24:22.263356   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263736   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.263769   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.263912   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH client type: external
	I1011 22:24:22.263936   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa (-rw-------)
	I1011 22:24:22.263959   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:22.263975   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | About to run SSH command:
	I1011 22:24:22.263991   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | exit 0
	I1011 22:24:22.391349   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | SSH cmd err, output: <nil>: 
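
The WaitForSSH step above shells out to the external ssh client with the machine's private key and runs `exit 0` until the command succeeds. An equivalent reachability check using golang.org/x/crypto/ssh (a sketch; the address, user, and key path mirror the log, and host-key checking is disabled just as StrictHostKeyChecking=no is in the external command):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// trySSH dials addr with the given private key and runs "exit 0".
func trySSH(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := trySSH("192.168.39.162:22",
		os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-070708/id_rsa"))
	fmt.Println("ssh reachable:", err == nil)
}
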
	I1011 22:24:22.391744   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetConfigRaw
	I1011 22:24:22.392361   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.394582   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.394953   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.394987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.395205   77741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/config.json ...
	I1011 22:24:22.395391   77741 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:22.395408   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:22.395620   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.397851   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398185   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.398215   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.398339   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.398517   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398671   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.398810   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.398947   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.399226   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.399243   77741 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:22.506891   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:22.506929   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507220   77741 buildroot.go:166] provisioning hostname "default-k8s-diff-port-070708"
	I1011 22:24:22.507252   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.507437   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.510300   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510694   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.510728   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.510830   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.511016   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511165   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.511449   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.511588   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.511783   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.511800   77741 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-070708 && echo "default-k8s-diff-port-070708" | sudo tee /etc/hostname
	I1011 22:24:22.632639   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-070708
	
	I1011 22:24:22.632673   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.635224   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635536   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.635570   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.635709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:22.635881   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636018   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:22.636166   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:22.636312   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:22.636503   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:22.636521   77741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-070708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-070708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-070708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:22.751402   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:22.751434   77741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:22.751490   77741 buildroot.go:174] setting up certificates
	I1011 22:24:22.751505   77741 provision.go:84] configureAuth start
	I1011 22:24:22.751522   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetMachineName
	I1011 22:24:22.751753   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:22.754256   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754611   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.754661   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.754827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:22.756857   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757175   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:22.757207   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:22.757327   77741 provision.go:143] copyHostCerts
	I1011 22:24:22.757384   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:22.757405   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:22.757479   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:22.757577   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:22.757586   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:22.757607   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:22.757660   77741 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:22.757667   77741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:22.757683   77741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:22.757738   77741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-070708 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-070708 localhost minikube]
	I1011 22:24:23.136674   77741 provision.go:177] copyRemoteCerts
	I1011 22:24:23.136726   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:23.136751   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.139576   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.139909   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.139939   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.140104   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.140302   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.140446   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.140553   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.224552   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:23.248389   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1011 22:24:23.271533   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:24:23.294727   77741 provision.go:87] duration metric: took 543.206381ms to configureAuth
	I1011 22:24:23.294757   77741 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:23.295005   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:24:23.295092   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.297776   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298066   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.298102   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.298225   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.298447   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298609   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.298747   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.298880   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.299054   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.299068   77741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:23.763523   78126 start.go:364] duration metric: took 3m45.728960967s to acquireMachinesLock for "old-k8s-version-323416"
	I1011 22:24:23.763592   78126 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:23.763604   78126 fix.go:54] fixHost starting: 
	I1011 22:24:23.764012   78126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:23.764064   78126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:23.780495   78126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1011 22:24:23.780916   78126 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:23.781341   78126 main.go:141] libmachine: Using API Version  1
	I1011 22:24:23.781367   78126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:23.781706   78126 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:23.781899   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:23.782038   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetState
	I1011 22:24:23.783698   78126 fix.go:112] recreateIfNeeded on old-k8s-version-323416: state=Stopped err=<nil>
	I1011 22:24:23.783729   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	W1011 22:24:23.783867   78126 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:23.785701   78126 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-323416" ...
	I1011 22:24:23.522759   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:23.522787   77741 machine.go:96] duration metric: took 1.127384391s to provisionDockerMachine
	I1011 22:24:23.522801   77741 start.go:293] postStartSetup for "default-k8s-diff-port-070708" (driver="kvm2")
	I1011 22:24:23.522814   77741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:23.522834   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.523149   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:23.523186   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.526415   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.526905   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.526927   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.527101   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.527304   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.527442   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.527548   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.609520   77741 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:23.614158   77741 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:23.614183   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:23.614257   77741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:23.614349   77741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:23.614460   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:23.623839   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:23.649574   77741 start.go:296] duration metric: took 126.758615ms for postStartSetup
	I1011 22:24:23.649619   77741 fix.go:56] duration metric: took 19.006146927s for fixHost
	I1011 22:24:23.649643   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.652832   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653204   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.653234   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.653439   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.653633   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653815   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.653987   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.654158   77741 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:23.654348   77741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1011 22:24:23.654362   77741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:23.763396   77741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685463.735816087
	
	I1011 22:24:23.763417   77741 fix.go:216] guest clock: 1728685463.735816087
	I1011 22:24:23.763435   77741 fix.go:229] Guest: 2024-10-11 22:24:23.735816087 +0000 UTC Remote: 2024-10-11 22:24:23.649624165 +0000 UTC m=+280.235201903 (delta=86.191922ms)
	I1011 22:24:23.763454   77741 fix.go:200] guest clock delta is within tolerance: 86.191922ms
	I1011 22:24:23.763459   77741 start.go:83] releasing machines lock for "default-k8s-diff-port-070708", held for 19.120018362s
	I1011 22:24:23.763483   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.763750   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:23.766956   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767357   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.767399   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.767553   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768140   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768301   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:24:23.768388   77741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:23.768438   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.768496   77741 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:23.768518   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:24:23.771106   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771145   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771526   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771567   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:23.771589   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771605   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:23.771709   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771855   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.771901   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:24:23.771980   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772053   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:24:23.772102   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.772171   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:24:23.772276   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:24:23.883476   77741 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:23.889434   77741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:24.036410   77741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:24.042728   77741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:24.042805   77741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:24.059112   77741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:24.059137   77741 start.go:495] detecting cgroup driver to use...
	I1011 22:24:24.059201   77741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:24.075267   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:24.088163   77741 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:24.088228   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:24.106336   77741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:24.123084   77741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:24.242599   77741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:24.411075   77741 docker.go:233] disabling docker service ...
	I1011 22:24:24.411159   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:24.430632   77741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:24.447508   77741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:24.617156   77741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:24.761101   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:24.776604   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:24.799678   77741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:24:24.799738   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.811501   77741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:24.811576   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.822565   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.833103   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.843670   77741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:24.855800   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.868918   77741 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.886996   77741 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:24.897487   77741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:24.907215   77741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:24.907263   77741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:24.920391   77741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:24.931383   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:25.065929   77741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:25.164594   77741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:25.164663   77741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:25.169492   77741 start.go:563] Will wait 60s for crictl version
	I1011 22:24:25.169540   77741 ssh_runner.go:195] Run: which crictl
	I1011 22:24:25.173355   77741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:25.220778   77741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:25.220876   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.253354   77741 ssh_runner.go:195] Run: crio --version
	I1011 22:24:25.287095   77741 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:24:22.488407   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:24.988742   77526 pod_ready.go:103] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:23.787113   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .Start
	I1011 22:24:23.787249   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring networks are active...
	I1011 22:24:23.787826   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network default is active
	I1011 22:24:23.788130   78126 main.go:141] libmachine: (old-k8s-version-323416) Ensuring network mk-old-k8s-version-323416 is active
	I1011 22:24:23.788500   78126 main.go:141] libmachine: (old-k8s-version-323416) Getting domain xml...
	I1011 22:24:23.789138   78126 main.go:141] libmachine: (old-k8s-version-323416) Creating domain...
	I1011 22:24:25.096108   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting to get IP...
	I1011 22:24:25.097166   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.097577   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.097673   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.097564   79061 retry.go:31] will retry after 250.045756ms: waiting for machine to come up
	I1011 22:24:25.348971   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.349522   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.349539   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.349479   79061 retry.go:31] will retry after 291.538354ms: waiting for machine to come up
	I1011 22:24:25.642822   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.643367   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.643397   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.643328   79061 retry.go:31] will retry after 296.79454ms: waiting for machine to come up
	I1011 22:24:25.941846   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:25.942353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:25.942386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:25.942280   79061 retry.go:31] will retry after 565.277921ms: waiting for machine to come up
	I1011 22:24:26.508851   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:26.509541   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:26.509563   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:26.509493   79061 retry.go:31] will retry after 638.452301ms: waiting for machine to come up
	I1011 22:24:27.149411   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:27.149934   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:27.149962   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:27.149897   79061 retry.go:31] will retry after 901.814526ms: waiting for machine to come up
	I1011 22:24:25.288116   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetIP
	I1011 22:24:25.291001   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291345   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:24:25.291390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:24:25.291579   77741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:25.295805   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:25.308821   77741 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:25.308959   77741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:24:25.309019   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:25.353205   77741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:24:25.353271   77741 ssh_runner.go:195] Run: which lz4
	I1011 22:24:25.357765   77741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:25.362126   77741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:25.362168   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1011 22:24:26.741249   77741 crio.go:462] duration metric: took 1.383506027s to copy over tarball
	I1011 22:24:26.741392   77741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:27.486887   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.486911   77526 pod_ready.go:82] duration metric: took 9.007140273s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.486926   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492698   77526 pod_ready.go:93] pod "kube-proxy-xz284" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:27.492717   77526 pod_ready.go:82] duration metric: took 5.784843ms for pod "kube-proxy-xz284" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:27.492726   77526 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:29.499666   77526 pod_ready.go:103] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:32.137260   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:32.137292   77526 pod_ready.go:82] duration metric: took 4.644558899s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:32.137307   77526 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:28.053045   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.053498   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.053525   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.053455   79061 retry.go:31] will retry after 934.692712ms: waiting for machine to come up
	I1011 22:24:28.989425   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:28.989913   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:28.989940   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:28.989866   79061 retry.go:31] will retry after 943.893896ms: waiting for machine to come up
	I1011 22:24:29.934961   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:29.935438   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:29.935471   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:29.935383   79061 retry.go:31] will retry after 1.838944067s: waiting for machine to come up
	I1011 22:24:31.775696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:31.776161   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:31.776189   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:31.776112   79061 retry.go:31] will retry after 2.275313596s: waiting for machine to come up
	I1011 22:24:28.851248   77741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1098168s)
	I1011 22:24:28.851285   77741 crio.go:469] duration metric: took 2.109983801s to extract the tarball
	I1011 22:24:28.851294   77741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:28.888408   77741 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:28.933361   77741 crio.go:514] all images are preloaded for cri-o runtime.
	I1011 22:24:28.933384   77741 cache_images.go:84] Images are preloaded, skipping loading
	I1011 22:24:28.933391   77741 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.31.1 crio true true} ...
	I1011 22:24:28.933510   77741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-070708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:28.933589   77741 ssh_runner.go:195] Run: crio config
	I1011 22:24:28.982515   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:28.982541   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:28.982554   77741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:28.982582   77741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-070708 NodeName:default-k8s-diff-port-070708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:24:28.982781   77741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-070708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:28.982862   77741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:24:28.993780   77741 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:28.993846   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:29.005252   77741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1011 22:24:29.023922   77741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:29.042177   77741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 22:24:29.059529   77741 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:29.063600   77741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:29.078061   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:29.204249   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:29.221115   77741 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708 for IP: 192.168.39.162
	I1011 22:24:29.221141   77741 certs.go:194] generating shared ca certs ...
	I1011 22:24:29.221161   77741 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:29.221349   77741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:29.221402   77741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:29.221413   77741 certs.go:256] generating profile certs ...
	I1011 22:24:29.221493   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/client.key
	I1011 22:24:29.221568   77741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key.07f8f6d8
	I1011 22:24:29.221645   77741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key
	I1011 22:24:29.221767   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:29.221803   77741 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:29.221812   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:29.221832   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:29.221853   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:29.221872   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:29.221929   77741 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:29.222760   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:29.262636   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:29.308886   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:29.348949   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:29.378795   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1011 22:24:29.426593   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 22:24:29.465414   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:29.491216   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/default-k8s-diff-port-070708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 22:24:29.518262   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:29.542270   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:29.565664   77741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:29.588852   77741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:29.606630   77741 ssh_runner.go:195] Run: openssl version
	I1011 22:24:29.612594   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:29.623089   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627591   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.627656   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:29.633544   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:29.644199   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:29.654783   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661009   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.661061   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:29.668950   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:29.684757   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:29.700687   77741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705578   77741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.705646   77741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:29.711533   77741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:29.722714   77741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:29.727419   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:29.733494   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:29.739565   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:29.745569   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:29.751428   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:29.757368   77741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:24:29.763272   77741 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-070708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-070708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:29.763379   77741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:29.763436   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.805191   77741 cri.go:89] found id: ""
	I1011 22:24:29.805263   77741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:29.819025   77741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:29.819049   77741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:29.819098   77741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:29.828470   77741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:29.829347   77741 kubeconfig.go:125] found "default-k8s-diff-port-070708" server: "https://192.168.39.162:8444"
	I1011 22:24:29.831385   77741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:29.840601   77741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1011 22:24:29.840630   77741 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:29.840640   77741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:29.840691   77741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:29.880123   77741 cri.go:89] found id: ""
	I1011 22:24:29.880199   77741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:29.897250   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:29.908273   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:29.908293   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:29.908340   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:24:29.917052   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:29.917110   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:29.926121   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:24:29.935494   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:29.935552   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:29.944951   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.953829   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:29.953890   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:29.963554   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:24:29.972917   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:29.972979   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:29.981962   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:29.990859   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.116668   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:30.856369   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.204973   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.261641   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:31.313332   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:31.313450   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:31.814503   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.313812   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.813821   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:32.833106   77741 api_server.go:72] duration metric: took 1.519770408s to wait for apiserver process to appear ...
	I1011 22:24:32.833142   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:24:32.833166   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.028524   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.028557   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.028573   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.035621   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:24:35.035651   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:24:35.334128   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.339051   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.339075   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:35.833305   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:35.838821   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:24:35.838851   77741 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:24:36.333367   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:24:36.338371   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:24:36.344660   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:24:36.344684   77741 api_server.go:131] duration metric: took 3.511533712s to wait for apiserver health ...
	I1011 22:24:36.344694   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:24:36.344703   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:36.346229   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
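	(The repeated /healthz dumps above are a poll-until-ready loop in api_server.go: the endpoint is queried roughly every 500ms and 403/500 responses are treated as "not ready yet" until a 200 arrives. A minimal, self-contained Go sketch of that polling pattern — illustrative only, not minikube's implementation; the URL is taken from this log and the timeout value is arbitrary:

	// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver serving cert is not trusted by this host, so the
			// probe skips certificate verification (as a raw reachability check).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz returned 200: apiserver is ready
				}
				// 403/500 while post-start hooks finish: keep retrying.
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
		}
		return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.162:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	)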
	I1011 22:24:34.148281   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:36.645574   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:34.052920   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:34.053279   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:34.053307   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:34.053236   79061 retry.go:31] will retry after 1.956752612s: waiting for machine to come up
	I1011 22:24:36.012353   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:36.012782   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:36.012808   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:36.012738   79061 retry.go:31] will retry after 2.836738921s: waiting for machine to come up
	I1011 22:24:36.347449   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:24:36.361278   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:24:36.384091   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:24:36.399422   77741 system_pods.go:59] 8 kube-system pods found
	I1011 22:24:36.399482   77741 system_pods.go:61] "coredns-7c65d6cfc9-bpv5v" [76f03ec1-b826-412f-8bb2-fcd555185dd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:24:36.399503   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [5f021850-47af-442e-81f9-fccf153afb5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:24:36.399521   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [12777485-8206-495d-9223-06574b1410a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:24:36.399557   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [4261e9f7-6e66-44d3-abbb-6fd541e62c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:24:36.399567   77741 system_pods.go:61] "kube-proxy-hsjth" [7ba3e685-be57-4e46-ac49-279bd32ca049] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:24:36.399575   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [1d170237-0bbe-4832-b5d2-cea7a11d5aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:24:36.399585   77741 system_pods.go:61] "metrics-server-6867b74b74-l7xbw" [998853a5-4215-4f3d-baa5-84e8f6bb91ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:24:36.399599   77741 system_pods.go:61] "storage-provisioner" [f618ffde-9d3a-43fd-999a-3855ac5de5d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:24:36.399612   77741 system_pods.go:74] duration metric: took 15.498192ms to wait for pod list to return data ...
	I1011 22:24:36.399627   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:24:36.403628   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:24:36.403652   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:24:36.403663   77741 node_conditions.go:105] duration metric: took 4.030681ms to run NodePressure ...
	I1011 22:24:36.403677   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:36.705101   77741 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710495   77741 kubeadm.go:739] kubelet initialised
	I1011 22:24:36.710514   77741 kubeadm.go:740] duration metric: took 5.389006ms waiting for restarted kubelet to initialise ...
	I1011 22:24:36.710521   77741 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:24:36.715511   77741 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:39.144299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.144365   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:38.851010   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:38.851388   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | unable to find current IP address of domain old-k8s-version-323416 in network mk-old-k8s-version-323416
	I1011 22:24:38.851415   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | I1011 22:24:38.851342   79061 retry.go:31] will retry after 4.138985465s: waiting for machine to come up
	I1011 22:24:38.723972   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:41.221423   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:43.222431   77741 pod_ready.go:103] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.627428   77373 start.go:364] duration metric: took 54.46189221s to acquireMachinesLock for "no-preload-390487"
	I1011 22:24:44.627494   77373 start.go:96] Skipping create...Using existing machine configuration
	I1011 22:24:44.627505   77373 fix.go:54] fixHost starting: 
	I1011 22:24:44.627904   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:24:44.627943   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:24:44.647097   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I1011 22:24:44.647594   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:24:44.648124   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:24:44.648149   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:24:44.648538   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:24:44.648719   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:24:44.648881   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:24:44.650660   77373 fix.go:112] recreateIfNeeded on no-preload-390487: state=Stopped err=<nil>
	I1011 22:24:44.650685   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	W1011 22:24:44.650829   77373 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 22:24:44.652887   77373 out.go:177] * Restarting existing kvm2 VM for "no-preload-390487" ...
	I1011 22:24:42.991764   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992136   78126 main.go:141] libmachine: (old-k8s-version-323416) Found IP for machine: 192.168.50.223
	I1011 22:24:42.992164   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has current primary IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.992178   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserving static IP address...
	I1011 22:24:42.992530   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.992560   78126 main.go:141] libmachine: (old-k8s-version-323416) Reserved static IP address: 192.168.50.223
	I1011 22:24:42.992573   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | skip adding static IP to network mk-old-k8s-version-323416 - found existing host DHCP lease matching {name: "old-k8s-version-323416", mac: "52:54:00:d4:30:4b", ip: "192.168.50.223"}
	I1011 22:24:42.992586   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Getting to WaitForSSH function...
	I1011 22:24:42.992602   78126 main.go:141] libmachine: (old-k8s-version-323416) Waiting for SSH to be available...
	I1011 22:24:42.994890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995219   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:42.995252   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:42.995386   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH client type: external
	I1011 22:24:42.995408   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa (-rw-------)
	I1011 22:24:42.995448   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:24:42.995466   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | About to run SSH command:
	I1011 22:24:42.995479   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | exit 0
	I1011 22:24:43.126815   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | SSH cmd err, output: <nil>: 
	I1011 22:24:43.127190   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetConfigRaw
	I1011 22:24:43.127788   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.130218   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130685   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.130717   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.130923   78126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/config.json ...
	I1011 22:24:43.131103   78126 machine.go:93] provisionDockerMachine start ...
	I1011 22:24:43.131119   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:43.131334   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.133576   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.133881   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.133909   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.134025   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.134183   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134375   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.134503   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.134691   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.134908   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.134923   78126 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:24:43.247090   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:24:43.247127   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247359   78126 buildroot.go:166] provisioning hostname "old-k8s-version-323416"
	I1011 22:24:43.247399   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.247578   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.250241   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250523   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.250550   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.250692   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.250882   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251058   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.251195   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.251372   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.251563   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.251580   78126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-323416 && echo "old-k8s-version-323416" | sudo tee /etc/hostname
	I1011 22:24:43.378294   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-323416
	
	I1011 22:24:43.378332   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.381001   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381382   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.381409   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.381667   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.381896   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382099   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.382264   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.382459   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:43.382702   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:43.382729   78126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-323416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-323416/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-323416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:24:43.508062   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:24:43.508093   78126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:24:43.508119   78126 buildroot.go:174] setting up certificates
	I1011 22:24:43.508128   78126 provision.go:84] configureAuth start
	I1011 22:24:43.508136   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetMachineName
	I1011 22:24:43.508405   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:43.511193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511532   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.511569   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.511664   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.513696   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514103   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.514121   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.514263   78126 provision.go:143] copyHostCerts
	I1011 22:24:43.514319   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:24:43.514335   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:24:43.514394   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:24:43.514497   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:24:43.514508   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:24:43.514528   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:24:43.514586   78126 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:24:43.514593   78126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:24:43.514611   78126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:24:43.514689   78126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-323416 san=[127.0.0.1 192.168.50.223 localhost minikube old-k8s-version-323416]
	I1011 22:24:43.983601   78126 provision.go:177] copyRemoteCerts
	I1011 22:24:43.983672   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:24:43.983702   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:43.986580   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.986957   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:43.987002   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:43.987176   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:43.987389   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:43.987543   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:43.987669   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.073030   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:24:44.096925   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 22:24:44.120064   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 22:24:44.143446   78126 provision.go:87] duration metric: took 635.306658ms to configureAuth
	I1011 22:24:44.143474   78126 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:24:44.143670   78126 config.go:182] Loaded profile config "old-k8s-version-323416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1011 22:24:44.143754   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.146547   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.146890   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.146917   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.147065   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.147258   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147431   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.147577   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.147729   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.147893   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.147907   78126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:24:44.383524   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:24:44.383552   78126 machine.go:96] duration metric: took 1.252438211s to provisionDockerMachine
	I1011 22:24:44.383564   78126 start.go:293] postStartSetup for "old-k8s-version-323416" (driver="kvm2")
	I1011 22:24:44.383576   78126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:24:44.383613   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.383942   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:24:44.383974   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.386690   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387037   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.387073   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.387164   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.387340   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.387492   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.387605   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.472998   78126 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:24:44.477066   78126 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:24:44.477087   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:24:44.477157   78126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:24:44.477248   78126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:24:44.477350   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:24:44.486122   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:44.512625   78126 start.go:296] duration metric: took 129.045295ms for postStartSetup
	I1011 22:24:44.512665   78126 fix.go:56] duration metric: took 20.749062033s for fixHost
	I1011 22:24:44.512684   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.515428   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515731   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.515761   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.515969   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.516146   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516343   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.516512   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.516688   78126 main.go:141] libmachine: Using SSH client type: native
	I1011 22:24:44.516873   78126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I1011 22:24:44.516883   78126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:24:44.627298   78126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685484.587419742
	
	I1011 22:24:44.627325   78126 fix.go:216] guest clock: 1728685484.587419742
	I1011 22:24:44.627333   78126 fix.go:229] Guest: 2024-10-11 22:24:44.587419742 +0000 UTC Remote: 2024-10-11 22:24:44.512668977 +0000 UTC m=+246.616272114 (delta=74.750765ms)
	I1011 22:24:44.627352   78126 fix.go:200] guest clock delta is within tolerance: 74.750765ms
	I1011 22:24:44.627357   78126 start.go:83] releasing machines lock for "old-k8s-version-323416", held for 20.863791567s
	I1011 22:24:44.627382   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.627627   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:44.630473   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.630840   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.630883   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.631027   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631479   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631651   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .DriverName
	I1011 22:24:44.631724   78126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:24:44.631775   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.631836   78126 ssh_runner.go:195] Run: cat /version.json
	I1011 22:24:44.631861   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHHostname
	I1011 22:24:44.634396   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634582   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634827   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.634855   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.634988   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:44.635025   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:44.635031   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635218   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635234   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHPort
	I1011 22:24:44.635363   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635376   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHKeyPath
	I1011 22:24:44.635607   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetSSHUsername
	I1011 22:24:44.635596   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.635744   78126 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/old-k8s-version-323416/id_rsa Username:docker}
	I1011 22:24:44.723765   78126 ssh_runner.go:195] Run: systemctl --version
	I1011 22:24:44.751240   78126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:24:44.905226   78126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:24:44.911441   78126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:24:44.911528   78126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:24:44.928617   78126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:24:44.928641   78126 start.go:495] detecting cgroup driver to use...
	I1011 22:24:44.928706   78126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:24:44.948383   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:24:44.964079   78126 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:24:44.964150   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:24:44.977682   78126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:24:44.991696   78126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:24:45.106675   78126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:24:45.248931   78126 docker.go:233] disabling docker service ...
	I1011 22:24:45.248997   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:24:45.264270   78126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:24:45.278244   78126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:24:45.420352   78126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:24:45.565322   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:24:45.588948   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:24:45.607175   78126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1011 22:24:45.607248   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.617910   78126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:24:45.617967   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.628282   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.640254   78126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:24:45.654145   78126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:24:45.666230   78126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:24:45.676158   78126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:24:45.676239   78126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:24:45.693629   78126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:24:45.705255   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:45.842374   78126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:24:45.956273   78126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:24:45.956338   78126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:24:45.961381   78126 start.go:563] Will wait 60s for crictl version
	I1011 22:24:45.961427   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:45.965381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:24:46.012843   78126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:24:46.012932   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.042492   78126 ssh_runner.go:195] Run: crio --version
	I1011 22:24:46.075464   78126 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1011 22:24:43.144430   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:45.645398   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:44.654550   77373 main.go:141] libmachine: (no-preload-390487) Calling .Start
	I1011 22:24:44.654840   77373 main.go:141] libmachine: (no-preload-390487) Ensuring networks are active...
	I1011 22:24:44.655546   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network default is active
	I1011 22:24:44.656008   77373 main.go:141] libmachine: (no-preload-390487) Ensuring network mk-no-preload-390487 is active
	I1011 22:24:44.656383   77373 main.go:141] libmachine: (no-preload-390487) Getting domain xml...
	I1011 22:24:44.657065   77373 main.go:141] libmachine: (no-preload-390487) Creating domain...
	I1011 22:24:45.980644   77373 main.go:141] libmachine: (no-preload-390487) Waiting to get IP...
	I1011 22:24:45.981635   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:45.982101   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:45.982167   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:45.982078   79243 retry.go:31] will retry after 195.443447ms: waiting for machine to come up
	I1011 22:24:46.179539   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.179999   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.180030   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.179953   79243 retry.go:31] will retry after 322.117828ms: waiting for machine to come up
	I1011 22:24:46.503434   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.503947   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.503969   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.503915   79243 retry.go:31] will retry after 295.160677ms: waiting for machine to come up
	I1011 22:24:46.801184   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:46.801763   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:46.801797   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:46.801716   79243 retry.go:31] will retry after 396.903731ms: waiting for machine to come up
	I1011 22:24:47.200047   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.200515   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.200543   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.200480   79243 retry.go:31] will retry after 750.816077ms: waiting for machine to come up
	I1011 22:24:46.076724   78126 main.go:141] libmachine: (old-k8s-version-323416) Calling .GetIP
	I1011 22:24:46.079799   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080193   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:30:4b", ip: ""} in network mk-old-k8s-version-323416: {Iface:virbr3 ExpiryTime:2024-10-11 23:24:35 +0000 UTC Type:0 Mac:52:54:00:d4:30:4b Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:old-k8s-version-323416 Clientid:01:52:54:00:d4:30:4b}
	I1011 22:24:46.080222   78126 main.go:141] libmachine: (old-k8s-version-323416) DBG | domain old-k8s-version-323416 has defined IP address 192.168.50.223 and MAC address 52:54:00:d4:30:4b in network mk-old-k8s-version-323416
	I1011 22:24:46.080448   78126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1011 22:24:46.085097   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:46.101031   78126 kubeadm.go:883] updating cluster {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:24:46.101175   78126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 22:24:46.101231   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:46.151083   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:46.151160   78126 ssh_runner.go:195] Run: which lz4
	I1011 22:24:46.155976   78126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 22:24:46.161849   78126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 22:24:46.161887   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1011 22:24:47.857363   78126 crio.go:462] duration metric: took 1.701437717s to copy over tarball
	I1011 22:24:47.857437   78126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 22:24:44.735539   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:44.735561   77741 pod_ready.go:82] duration metric: took 8.020026677s for pod "coredns-7c65d6cfc9-bpv5v" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:44.735576   77741 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:46.744354   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:48.144609   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:50.149053   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:47.952867   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:47.953464   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:47.953495   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:47.953288   79243 retry.go:31] will retry after 639.218351ms: waiting for machine to come up
	I1011 22:24:48.594034   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:48.594428   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:48.594484   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:48.594409   79243 retry.go:31] will retry after 884.81772ms: waiting for machine to come up
	I1011 22:24:49.480960   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:49.481335   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:49.481362   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:49.481290   79243 retry.go:31] will retry after 1.298501886s: waiting for machine to come up
	I1011 22:24:50.781446   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:50.781854   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:50.781878   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:50.781800   79243 retry.go:31] will retry after 1.856156849s: waiting for machine to come up
	I1011 22:24:50.866896   78126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009433722s)
	I1011 22:24:50.866923   78126 crio.go:469] duration metric: took 3.009532765s to extract the tarball
	I1011 22:24:50.866932   78126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 22:24:50.910428   78126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:24:50.952694   78126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1011 22:24:50.952720   78126 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.952804   78126 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1011 22:24:50.952873   78126 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.952900   78126 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.952866   78126 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:50.953009   78126 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.953018   78126 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.952819   78126 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1011 22:24:50.954764   78126 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:50.954743   78126 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:50.954806   78126 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:50.954737   78126 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:50.954749   78126 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.101548   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.102871   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.131961   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.170382   78126 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1011 22:24:51.170443   78126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.170497   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.188058   78126 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1011 22:24:51.188105   78126 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.188157   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212419   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.212445   78126 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1011 22:24:51.212672   78126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.212706   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.212452   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.241873   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.268090   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.273835   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.295065   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.302000   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.349867   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.404922   78126 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1011 22:24:51.404977   78126 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1011 22:24:51.404990   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.405020   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.405026   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1011 22:24:51.405079   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1011 22:24:51.416864   78126 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1011 22:24:51.416911   78126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.416963   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.507248   78126 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1011 22:24:51.507290   78126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.507333   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.517540   78126 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1011 22:24:51.517585   78126 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.517634   78126 ssh_runner.go:195] Run: which crictl
	I1011 22:24:51.538443   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1011 22:24:51.538548   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1011 22:24:51.538561   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.538602   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.538632   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1011 22:24:51.541246   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.541325   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.610700   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.651283   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1011 22:24:51.651304   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.651382   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.656433   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.693381   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1011 22:24:51.732685   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1011 22:24:51.748942   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1011 22:24:51.754714   78126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1011 22:24:51.789584   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1011 22:24:51.811640   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1011 22:24:51.832201   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1011 22:24:51.835865   78126 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1011 22:24:52.082703   78126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:24:52.231170   78126 cache_images.go:92] duration metric: took 1.278430264s to LoadCachedImages
	W1011 22:24:52.231279   78126 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1011 22:24:52.231298   78126 kubeadm.go:934] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I1011 22:24:52.231407   78126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-323416 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:24:52.231491   78126 ssh_runner.go:195] Run: crio config
	I1011 22:24:52.286063   78126 cni.go:84] Creating CNI manager for ""
	I1011 22:24:52.286098   78126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:24:52.286112   78126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:24:52.286141   78126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-323416 NodeName:old-k8s-version-323416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 22:24:52.286333   78126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-323416"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:24:52.286445   78126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 22:24:52.296935   78126 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:24:52.297021   78126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:24:52.307375   78126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1011 22:24:52.324772   78126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:24:52.342241   78126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1011 22:24:52.361620   78126 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I1011 22:24:52.365823   78126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:24:52.378695   78126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:24:52.513087   78126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:24:52.532243   78126 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416 for IP: 192.168.50.223
	I1011 22:24:52.532267   78126 certs.go:194] generating shared ca certs ...
	I1011 22:24:52.532288   78126 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:52.532463   78126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:24:52.532532   78126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:24:52.532545   78126 certs.go:256] generating profile certs ...
	I1011 22:24:52.532659   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/client.key
	I1011 22:24:52.532730   78126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key.7ceeacb9
	I1011 22:24:52.532799   78126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key
	I1011 22:24:52.532957   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:24:52.532996   78126 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:24:52.533009   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:24:52.533040   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:24:52.533073   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:24:52.533105   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:24:52.533159   78126 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:24:52.533973   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:24:52.585384   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:24:52.619052   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:24:52.654607   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:24:52.696247   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 22:24:52.737090   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:24:52.773950   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:24:52.805647   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/old-k8s-version-323416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:24:52.835209   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:24:52.860239   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:24:52.887034   78126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:24:52.912600   78126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:24:52.930321   78126 ssh_runner.go:195] Run: openssl version
	I1011 22:24:49.242663   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:51.875476   77741 pod_ready.go:103] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:53.411915   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.411937   77741 pod_ready.go:82] duration metric: took 8.676353233s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.411950   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418808   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.418827   77741 pod_ready.go:82] duration metric: took 6.869777ms for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.418838   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428224   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.428257   77741 pod_ready.go:82] duration metric: took 9.411307ms for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.428270   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438263   77741 pod_ready.go:93] pod "kube-proxy-hsjth" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.438293   77741 pod_ready.go:82] duration metric: took 10.015779ms for pod "kube-proxy-hsjth" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.438307   77741 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444909   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:24:53.444932   77741 pod_ready.go:82] duration metric: took 6.618233ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:53.444943   77741 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	I1011 22:24:52.646299   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:55.144236   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:52.640024   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:52.640568   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:52.640600   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:52.640516   79243 retry.go:31] will retry after 1.634063154s: waiting for machine to come up
	I1011 22:24:54.275779   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:54.276278   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:54.276307   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:54.276222   79243 retry.go:31] will retry after 2.141763066s: waiting for machine to come up
	I1011 22:24:56.419913   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:56.420312   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:56.420333   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:56.420279   79243 retry.go:31] will retry after 3.322852036s: waiting for machine to come up
	I1011 22:24:52.936979   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:24:52.948202   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952898   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.952954   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:24:52.958929   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
	I1011 22:24:52.969840   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:24:52.981062   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985800   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.985855   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:24:52.991763   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:24:53.002764   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:24:53.018419   78126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023755   78126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.023822   78126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:24:53.030938   78126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:24:53.042357   78126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:24:53.047975   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:24:53.054782   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:24:53.061070   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:24:53.067406   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:24:53.073639   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:24:53.079660   78126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:24:53.085866   78126 kubeadm.go:392] StartCluster: {Name:old-k8s-version-323416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-323416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:24:53.085983   78126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:24:53.086045   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.131849   78126 cri.go:89] found id: ""
	I1011 22:24:53.131924   78126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:24:53.143530   78126 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:24:53.143553   78126 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:24:53.143612   78126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:24:53.154098   78126 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:24:53.155495   78126 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-323416" does not appear in /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:24:53.156535   78126 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-11611/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-323416" cluster setting kubeconfig missing "old-k8s-version-323416" context setting]
	I1011 22:24:53.157948   78126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:24:53.272414   78126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:24:53.284659   78126 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.223
	I1011 22:24:53.284701   78126 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:24:53.284715   78126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:24:53.284774   78126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:24:53.330481   78126 cri.go:89] found id: ""
	I1011 22:24:53.330550   78126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:24:53.347638   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:24:53.357827   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:24:53.357851   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:24:53.357905   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:24:53.367762   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:24:53.367835   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:24:53.378586   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:24:53.388527   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:24:53.388615   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:24:53.398763   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.410888   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:24:53.410957   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:24:53.421858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:24:53.432325   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:24:53.432387   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:24:53.443445   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:24:53.455558   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:53.580407   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.549379   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.818476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:54.942636   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:24:55.067587   78126 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:24:55.067707   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.568499   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.068373   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:56.568700   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.068012   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:57.568734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:55.451196   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.452254   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:57.645338   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:00.142994   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.147083   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:24:59.745010   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:24:59.745433   77373 main.go:141] libmachine: (no-preload-390487) DBG | unable to find current IP address of domain no-preload-390487 in network mk-no-preload-390487
	I1011 22:24:59.745457   77373 main.go:141] libmachine: (no-preload-390487) DBG | I1011 22:24:59.745377   79243 retry.go:31] will retry after 4.379442156s: waiting for machine to come up
	I1011 22:24:58.068301   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:58.567894   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.067739   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.567954   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.068612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:00.568612   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.068565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:01.567861   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.067817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:02.568535   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:24:59.953903   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:02.451156   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:04.127900   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has current primary IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.128566   77373 main.go:141] libmachine: (no-preload-390487) Found IP for machine: 192.168.61.55
	I1011 22:25:04.128581   77373 main.go:141] libmachine: (no-preload-390487) Reserving static IP address...
	I1011 22:25:04.129112   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.129144   77373 main.go:141] libmachine: (no-preload-390487) DBG | skip adding static IP to network mk-no-preload-390487 - found existing host DHCP lease matching {name: "no-preload-390487", mac: "52:54:00:dc:7a:6d", ip: "192.168.61.55"}
	I1011 22:25:04.129157   77373 main.go:141] libmachine: (no-preload-390487) Reserved static IP address: 192.168.61.55
	I1011 22:25:04.129170   77373 main.go:141] libmachine: (no-preload-390487) Waiting for SSH to be available...
	I1011 22:25:04.129179   77373 main.go:141] libmachine: (no-preload-390487) DBG | Getting to WaitForSSH function...
	I1011 22:25:04.131402   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131668   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.131698   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.131864   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH client type: external
	I1011 22:25:04.131892   77373 main.go:141] libmachine: (no-preload-390487) DBG | Using SSH private key: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa (-rw-------)
	I1011 22:25:04.131922   77373 main.go:141] libmachine: (no-preload-390487) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1011 22:25:04.131936   77373 main.go:141] libmachine: (no-preload-390487) DBG | About to run SSH command:
	I1011 22:25:04.131950   77373 main.go:141] libmachine: (no-preload-390487) DBG | exit 0
	I1011 22:25:04.258578   77373 main.go:141] libmachine: (no-preload-390487) DBG | SSH cmd err, output: <nil>: 
	I1011 22:25:04.258971   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetConfigRaw
	I1011 22:25:04.259663   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.262128   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262510   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.262542   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.262838   77373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/config.json ...
	I1011 22:25:04.263066   77373 machine.go:93] provisionDockerMachine start ...
	I1011 22:25:04.263088   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:04.263316   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.265560   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.265843   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.265862   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.266086   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.266277   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266448   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.266597   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.266755   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.266968   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.266982   77373 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 22:25:04.375270   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 22:25:04.375306   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375541   77373 buildroot.go:166] provisioning hostname "no-preload-390487"
	I1011 22:25:04.375564   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.375718   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.378706   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379069   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.379091   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.379315   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.379515   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.379852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.380026   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.380213   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.380224   77373 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-390487 && echo "no-preload-390487" | sudo tee /etc/hostname
	I1011 22:25:04.503359   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-390487
	
	I1011 22:25:04.503392   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.506163   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506502   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.506537   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.506742   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.506924   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507077   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.507332   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.507483   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.507660   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.507676   77373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-390487' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-390487/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-390487' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 22:25:04.624804   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 22:25:04.624850   77373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19749-11611/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-11611/.minikube}
	I1011 22:25:04.624880   77373 buildroot.go:174] setting up certificates
	I1011 22:25:04.624893   77373 provision.go:84] configureAuth start
	I1011 22:25:04.624909   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetMachineName
	I1011 22:25:04.625190   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:04.627950   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628278   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.628320   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.628458   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.630686   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631012   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.631040   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.631168   77373 provision.go:143] copyHostCerts
	I1011 22:25:04.631234   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem, removing ...
	I1011 22:25:04.631255   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem
	I1011 22:25:04.631328   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/ca.pem (1082 bytes)
	I1011 22:25:04.631438   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem, removing ...
	I1011 22:25:04.631450   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem
	I1011 22:25:04.631488   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/cert.pem (1123 bytes)
	I1011 22:25:04.631564   77373 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem, removing ...
	I1011 22:25:04.631575   77373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem
	I1011 22:25:04.631600   77373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-11611/.minikube/key.pem (1679 bytes)
	I1011 22:25:04.631668   77373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem org=jenkins.no-preload-390487 san=[127.0.0.1 192.168.61.55 localhost minikube no-preload-390487]
	I1011 22:25:04.736741   77373 provision.go:177] copyRemoteCerts
	I1011 22:25:04.736802   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 22:25:04.736830   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.739358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739665   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.739695   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.739849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.740016   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.740156   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.740291   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:04.826024   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 22:25:04.851100   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 22:25:04.875010   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 22:25:04.899107   77373 provision.go:87] duration metric: took 274.198948ms to configureAuth
	I1011 22:25:04.899133   77373 buildroot.go:189] setting minikube options for container-runtime
	I1011 22:25:04.899323   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:25:04.899405   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:04.901744   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902079   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:04.902108   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:04.902320   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:04.902518   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902689   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:04.902911   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:04.903095   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:04.903284   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:04.903304   77373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1011 22:25:05.129377   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1011 22:25:05.129406   77373 machine.go:96] duration metric: took 866.326736ms to provisionDockerMachine
	I1011 22:25:05.129420   77373 start.go:293] postStartSetup for "no-preload-390487" (driver="kvm2")
	I1011 22:25:05.129435   77373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 22:25:05.129455   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.129768   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 22:25:05.129798   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.132216   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132539   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.132579   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.132703   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.132891   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.133039   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.133177   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.219144   77373 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 22:25:05.223510   77373 info.go:137] Remote host: Buildroot 2023.02.9
	I1011 22:25:05.223549   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/addons for local assets ...
	I1011 22:25:05.223634   77373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-11611/.minikube/files for local assets ...
	I1011 22:25:05.223728   77373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem -> 188142.pem in /etc/ssl/certs
	I1011 22:25:05.223837   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 22:25:05.234069   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:05.259266   77373 start.go:296] duration metric: took 129.829951ms for postStartSetup
	I1011 22:25:05.259313   77373 fix.go:56] duration metric: took 20.631808044s for fixHost
	I1011 22:25:05.259335   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.262071   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262313   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.262340   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.262493   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.262702   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.262899   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.263030   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.263243   77373 main.go:141] libmachine: Using SSH client type: native
	I1011 22:25:05.263425   77373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.55 22 <nil> <nil>}
	I1011 22:25:05.263470   77373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 22:25:05.367341   77373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728685505.320713090
	
	I1011 22:25:05.367368   77373 fix.go:216] guest clock: 1728685505.320713090
	I1011 22:25:05.367378   77373 fix.go:229] Guest: 2024-10-11 22:25:05.32071309 +0000 UTC Remote: 2024-10-11 22:25:05.259318089 +0000 UTC m=+357.684959787 (delta=61.395001ms)
	I1011 22:25:05.367397   77373 fix.go:200] guest clock delta is within tolerance: 61.395001ms
	I1011 22:25:05.367409   77373 start.go:83] releasing machines lock for "no-preload-390487", held for 20.739943225s
	I1011 22:25:05.367428   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.367673   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:05.370276   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370611   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.370648   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.370815   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371423   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371608   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:25:05.371674   77373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 22:25:05.371726   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.371914   77373 ssh_runner.go:195] Run: cat /version.json
	I1011 22:25:05.371939   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:25:05.374358   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374730   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.374764   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374794   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.374915   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375073   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375227   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375232   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:05.375256   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:05.375342   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.375449   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:25:05.375560   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:25:05.375714   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:25:05.375819   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:25:05.482886   77373 ssh_runner.go:195] Run: systemctl --version
	I1011 22:25:05.489351   77373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1011 22:25:05.643786   77373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 22:25:05.650229   77373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 22:25:05.650296   77373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 22:25:05.666494   77373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 22:25:05.666522   77373 start.go:495] detecting cgroup driver to use...
	I1011 22:25:05.666582   77373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 22:25:05.683659   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 22:25:05.697066   77373 docker.go:217] disabling cri-docker service (if available) ...
	I1011 22:25:05.697119   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 22:25:05.712780   77373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 22:25:05.728824   77373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 22:25:05.844693   77373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 22:25:06.021006   77373 docker.go:233] disabling docker service ...
	I1011 22:25:06.021064   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 22:25:06.035844   77373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 22:25:06.049585   77373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 22:25:06.194294   77373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 22:25:06.333778   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 22:25:06.349522   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 22:25:06.370214   77373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1011 22:25:06.370285   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.380680   77373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1011 22:25:06.380751   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.390974   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.402539   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.414129   77373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 22:25:06.425521   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.435647   77373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.453454   77373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1011 22:25:06.463564   77373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 22:25:06.473487   77373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1011 22:25:06.473560   77373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1011 22:25:06.487972   77373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 22:25:06.498579   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:06.626975   77373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1011 22:25:06.736608   77373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1011 22:25:06.736681   77373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1011 22:25:06.742858   77373 start.go:563] Will wait 60s for crictl version
	I1011 22:25:06.742916   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:06.746699   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 22:25:06.785073   77373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1011 22:25:06.785172   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.812373   77373 ssh_runner.go:195] Run: crio --version
	I1011 22:25:06.842453   77373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1011 22:25:04.645257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.143877   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.843849   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetIP
	I1011 22:25:06.846526   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.846822   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:25:06.846870   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:25:06.847073   77373 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1011 22:25:06.851361   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:06.864316   77373 kubeadm.go:883] updating cluster {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 22:25:06.864426   77373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 22:25:06.864455   77373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 22:25:06.904225   77373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1011 22:25:06.904253   77373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 22:25:06.904307   77373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:06.904342   77373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.904360   77373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.904376   77373 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.904363   77373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.904475   77373 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.904499   77373 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 22:25:06.904480   77373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:06.905783   77373 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:06.905694   77373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:06.905680   77373 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 22:25:06.905679   77373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:06.905686   77373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:06.905688   77373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:07.057329   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.060095   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.080674   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1011 22:25:07.081598   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.085905   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.097740   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.106415   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.136780   77373 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1011 22:25:07.136834   77373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.136888   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.152692   77373 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1011 22:25:07.152730   77373 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.152784   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341838   77373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1011 22:25:07.341882   77373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.341890   77373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1011 22:25:07.341916   77373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.341929   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341947   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.341973   77373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1011 22:25:07.341998   77373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1011 22:25:07.342007   77373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.342041   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.342014   77373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.342058   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.342053   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.342099   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:07.355230   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.355409   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.439441   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.439572   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.439515   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.444043   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:07.444071   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.578269   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.578424   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1011 22:25:07.580474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1011 22:25:07.580516   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1011 22:25:07.580535   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.580606   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 22:25:03.067731   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:03.568585   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.068609   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.568185   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.068642   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:05.568550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.068167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:06.568139   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.068510   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:07.568592   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:04.451555   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:06.951138   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:09.144842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:11.643405   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:07.697848   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 22:25:07.697957   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1011 22:25:07.697984   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.722151   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1011 22:25:07.722269   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:07.734336   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 22:25:07.734449   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:07.734475   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1011 22:25:07.734489   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1011 22:25:07.734500   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 22:25:07.734508   77373 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734541   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1011 22:25:07.734578   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:07.788345   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 22:25:07.788371   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1011 22:25:07.788446   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:07.816070   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1011 22:25:07.816308   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 22:25:07.816394   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:08.066781   77373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.943666   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.209065908s)
	I1011 22:25:09.943709   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1011 22:25:09.943750   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.20918304s)
	I1011 22:25:09.943771   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1011 22:25:09.943779   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.155317638s)
	I1011 22:25:09.943793   77373 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943796   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1011 22:25:09.943829   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.127421611s)
	I1011 22:25:09.943841   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1011 22:25:09.943848   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1011 22:25:09.943878   77373 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.877065002s)
	I1011 22:25:09.943925   77373 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 22:25:09.943968   77373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:09.944013   77373 ssh_runner.go:195] Run: which crictl
	I1011 22:25:08.067924   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.568493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.068539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:09.568400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.068320   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:10.568357   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.068164   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:11.568044   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.067762   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:12.568802   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:08.951973   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:10.953032   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.644601   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.645917   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:13.641438   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.697578704s)
	I1011 22:25:13.641519   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1011 22:25:13.641523   77373 ssh_runner.go:235] Completed: which crictl: (3.697489585s)
	I1011 22:25:13.641556   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641597   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1011 22:25:13.641598   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534810   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.893187916s)
	I1011 22:25:15.534865   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1011 22:25:15.534893   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.893219513s)
	I1011 22:25:15.534963   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:15.534898   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:15.535027   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1011 22:25:13.068749   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.568696   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.068736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:14.568121   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.068455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:15.568153   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.067815   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:16.568565   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.068252   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:17.567907   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:13.452229   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:15.951490   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.952280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:18.143828   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:20.144712   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:17.707389   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.172401078s)
	I1011 22:25:17.707420   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.172369128s)
	I1011 22:25:17.707443   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1011 22:25:17.707474   77373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:25:17.707476   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:17.707644   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1011 22:25:19.168147   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460475389s)
	I1011 22:25:19.168190   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1011 22:25:19.168156   77373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.460655676s)
	I1011 22:25:19.168221   77373 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168242   77373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 22:25:19.168276   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1011 22:25:19.168336   77373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.123906   77373 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.955605804s)
	I1011 22:25:21.123945   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1011 22:25:21.123991   77373 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.955631476s)
	I1011 22:25:21.124019   77373 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1011 22:25:21.124030   77373 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.124068   77373 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1011 22:25:21.773002   77373 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19749-11611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 22:25:21.773050   77373 cache_images.go:123] Successfully loaded all cached images
	I1011 22:25:21.773057   77373 cache_images.go:92] duration metric: took 14.868794284s to LoadCachedImages
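
Each archive loaded above was staged under /var/lib/minikube/images and pushed into the node's image store with the literal command shown in the log, "sudo podman load -i <archive>". A minimal standalone sketch of that loop (illustrative only, not minikube's cache_images implementation; it assumes podman and sudo are available on the node):

    // cacheload.go - illustrative sketch only, not minikube's implementation.
    // Replays the step recorded above: every image archive staged under
    // /var/lib/minikube/images is handed to "podman load -i <archive>".
    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    func main() {
        archives, err := filepath.Glob("/var/lib/minikube/images/*")
        if err != nil {
            panic(err)
        }
        for _, archive := range archives {
            // Same command the log shows ssh_runner executing on the node.
            out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
            fmt.Printf("%s\n%s", archive, out)
            if err != nil {
                fmt.Printf("load failed for %s: %v\n", archive, err)
            }
        }
    }
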
	I1011 22:25:21.773074   77373 kubeadm.go:934] updating node { 192.168.61.55 8443 v1.31.1 crio true true} ...
	I1011 22:25:21.773185   77373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-390487 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 22:25:21.773265   77373 ssh_runner.go:195] Run: crio config
	I1011 22:25:21.821268   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:21.821291   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:21.821301   77373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 22:25:21.821321   77373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-390487 NodeName:no-preload-390487 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 22:25:21.821490   77373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-390487"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 22:25:21.821564   77373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 22:25:21.832830   77373 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 22:25:21.832905   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 22:25:21.842726   77373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1011 22:25:21.859739   77373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 22:25:21.876192   77373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
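
The 2158-byte kubeadm.yaml.new written above is the multi-document config rendered earlier in the log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A rough sketch of inspecting such a file, assuming the gopkg.in/yaml.v3 package is available (this is not minikube code):

    // kubeadmcfg.go - illustrative sketch, not part of minikube.
    // Reads the multi-document kubeadm config written above and prints each
    // document's kind plus the declared kubernetesVersion where present.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency for YAML decoding
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println("kind:", doc["kind"])
            if v, ok := doc["kubernetesVersion"]; ok {
                fmt.Println("  kubernetesVersion:", v)
            }
        }
    }
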
	I1011 22:25:21.893366   77373 ssh_runner.go:195] Run: grep 192.168.61.55	control-plane.minikube.internal$ /etc/hosts
	I1011 22:25:21.897435   77373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 22:25:21.909840   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:25:22.021697   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:25:22.039163   77373 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487 for IP: 192.168.61.55
	I1011 22:25:22.039187   77373 certs.go:194] generating shared ca certs ...
	I1011 22:25:22.039207   77373 certs.go:226] acquiring lock for ca certs: {Name:mk63a26468b61aa1df3bbb7aec80d57f7808ea17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:25:22.039385   77373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key
	I1011 22:25:22.039449   77373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key
	I1011 22:25:22.039462   77373 certs.go:256] generating profile certs ...
	I1011 22:25:22.039587   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/client.key
	I1011 22:25:22.039668   77373 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key.6a466d38
	I1011 22:25:22.039713   77373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key
	I1011 22:25:22.039858   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem (1338 bytes)
	W1011 22:25:22.039901   77373 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814_empty.pem, impossibly tiny 0 bytes
	I1011 22:25:22.039912   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 22:25:22.039959   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/ca.pem (1082 bytes)
	I1011 22:25:22.040001   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/cert.pem (1123 bytes)
	I1011 22:25:22.040029   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/certs/key.pem (1679 bytes)
	I1011 22:25:22.040089   77373 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem (1708 bytes)
	I1011 22:25:22.040914   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 22:25:22.077604   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 22:25:22.133879   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 22:25:22.164886   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 22:25:22.197655   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1011 22:25:22.229594   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 22:25:22.264506   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 22:25:22.287571   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/no-preload-390487/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 22:25:22.310555   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/ssl/certs/188142.pem --> /usr/share/ca-certificates/188142.pem (1708 bytes)
	I1011 22:25:22.333333   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 22:25:22.356094   77373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-11611/.minikube/certs/18814.pem --> /usr/share/ca-certificates/18814.pem (1338 bytes)
	I1011 22:25:22.380156   77373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 22:25:22.398056   77373 ssh_runner.go:195] Run: openssl version
	I1011 22:25:22.403799   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188142.pem && ln -fs /usr/share/ca-certificates/188142.pem /etc/ssl/certs/188142.pem"
	I1011 22:25:22.415645   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420352   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:12 /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.420411   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188142.pem
	I1011 22:25:22.426457   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188142.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 22:25:22.438182   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 22:25:22.449704   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454778   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.454840   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 22:25:22.460601   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 22:25:22.472587   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18814.pem && ln -fs /usr/share/ca-certificates/18814.pem /etc/ssl/certs/18814.pem"
	I1011 22:25:22.485096   77373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489673   77373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:12 /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.489729   77373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18814.pem
	I1011 22:25:22.495547   77373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18814.pem /etc/ssl/certs/51391683.0"
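
The ca-certificates steps above follow the standard OpenSSL trust-store convention: hash the certificate subject with "openssl x509 -hash -noout -in <pem>", then symlink the PEM to /etc/ssl/certs/<hash>.0. A hedged sketch of the same idea (assumes root privileges and no hash collision, which would otherwise need a .1 suffix; this is not minikube's certs.go):

    // certlink.go - illustrative sketch of the hash-symlink step logged above.
    // Computes the OpenSSL subject hash of a CA certificate and links it into
    // /etc/ssl/certs/<hash>.0 so the system trust store can find it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCA(pem string) error {
        // Same as the logged command: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked; the logged command uses ln -fs instead
        }
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
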
	I1011 22:25:22.507652   77373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 22:25:22.513081   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 22:25:22.519287   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 22:25:22.525159   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 22:25:22.531170   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 22:25:22.537321   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 22:25:22.543093   77373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1011 22:25:22.548832   77373 kubeadm.go:392] StartCluster: {Name:no-preload-390487 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-390487 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 22:25:22.548926   77373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1011 22:25:22.548972   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.594269   77373 cri.go:89] found id: ""
	I1011 22:25:22.594341   77373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 22:25:22.604950   77373 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 22:25:22.604976   77373 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 22:25:22.605025   77373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 22:25:18.067978   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:18.568737   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.068355   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:19.568389   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.068614   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.568167   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.068292   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:21.567868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.068163   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:22.568086   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:20.452376   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.950987   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.644866   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:25.143773   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.144243   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:22.615035   77373 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 22:25:22.615951   77373 kubeconfig.go:125] found "no-preload-390487" server: "https://192.168.61.55:8443"
	I1011 22:25:22.618000   77373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 22:25:22.628327   77373 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.55
	I1011 22:25:22.628367   77373 kubeadm.go:1160] stopping kube-system containers ...
	I1011 22:25:22.628379   77373 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1011 22:25:22.628426   77373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 22:25:22.681709   77373 cri.go:89] found id: ""
	I1011 22:25:22.681769   77373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 22:25:22.697989   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:25:22.707772   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:25:22.707792   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:25:22.707838   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:25:22.716928   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:25:22.716984   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:25:22.726327   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:25:22.735769   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:25:22.735819   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:25:22.745468   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.754493   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:25:22.754552   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:25:22.764062   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:25:22.773234   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:25:22.773298   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:25:22.782913   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:25:22.792119   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:22.910184   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:23.868070   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.095326   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.164924   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:24.251769   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:25:24.251852   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.752110   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.252591   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.278468   77373 api_server.go:72] duration metric: took 1.026698113s to wait for apiserver process to appear ...
	I1011 22:25:25.278498   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:25:25.278521   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:25.278974   77373 api_server.go:269] stopped: https://192.168.61.55:8443/healthz: Get "https://192.168.61.55:8443/healthz": dial tcp 192.168.61.55:8443: connect: connection refused
	I1011 22:25:25.778778   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:23.068201   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:23.567882   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.068482   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.567968   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.068574   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:25.568302   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.068650   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:26.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.068063   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:27.568322   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:24.951896   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:27.451534   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:28.012373   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.012412   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.012437   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.099444   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 22:25:28.099503   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 22:25:28.278723   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.284616   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.284647   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:28.779287   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:28.786100   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1011 22:25:28.786125   77373 api_server.go:103] status: https://192.168.61.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1011 22:25:29.278680   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:25:29.285168   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:25:29.291497   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:25:29.291526   77373 api_server.go:131] duration metric: took 4.013020818s to wait for apiserver health ...
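
The healthz sequence above is the usual apiserver restart pattern: connection refused while the process starts, 403 for the anonymous probe before RBAC is bootstrapped, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200 "ok". A minimal poller sketch against the address from the log (illustrative only; it skips TLS verification because the probing host does not trust the apiserver certificate):

    // healthz.go - illustrative sketch of the /healthz polling seen above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip certificate verification for this local diagnostic probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.61.55:8443/healthz" // address taken from the log
        for i := 0; i < 20; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
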
	I1011 22:25:29.291537   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:25:29.291545   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:25:29.293325   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:25:29.644410   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:32.144466   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:29.294582   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:25:29.306107   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:25:29.331655   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:25:29.346931   77373 system_pods.go:59] 8 kube-system pods found
	I1011 22:25:29.346973   77373 system_pods.go:61] "coredns-7c65d6cfc9-5z4p5" [a369ddfd-01d5-4d2a-a63b-ab36b26f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:25:29.346986   77373 system_pods.go:61] "etcd-no-preload-390487" [b9aa7965-9be2-43b4-a291-246e5f27fa00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1011 22:25:29.346998   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [17e9a39a-2084-4504-8f9c-602cad87536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 22:25:29.347004   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [c4dc9017-6062-444e-b11f-23762dc5ef3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 22:25:29.347010   77373 system_pods.go:61] "kube-proxy-82p2c" [555091e0-b40d-49a6-a964-80baf143c001] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1011 22:25:29.347029   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [dcfc8186-23f5-4744-93f8-080180f93be6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 22:25:29.347034   77373 system_pods.go:61] "metrics-server-6867b74b74-tk8fq" [8fb649e0-2af0-4655-8251-356873e2213e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:25:29.347041   77373 system_pods.go:61] "storage-provisioner" [a01f8ac1-6d29-4885-86a7-c7ef0c289b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1011 22:25:29.347047   77373 system_pods.go:74] duration metric: took 15.369022ms to wait for pod list to return data ...
	I1011 22:25:29.347055   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:25:29.352543   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:25:29.352576   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:25:29.352590   77373 node_conditions.go:105] duration metric: took 5.52943ms to run NodePressure ...
	I1011 22:25:29.352613   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 22:25:29.648681   77373 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652653   77373 kubeadm.go:739] kubelet initialised
	I1011 22:25:29.652671   77373 kubeadm.go:740] duration metric: took 3.972281ms waiting for restarted kubelet to initialise ...
	I1011 22:25:29.652679   77373 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:25:29.658454   77373 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.663740   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663768   77373 pod_ready.go:82] duration metric: took 5.289381ms for pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.663780   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "coredns-7c65d6cfc9-5z4p5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.663791   77373 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.668667   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668693   77373 pod_ready.go:82] duration metric: took 4.892171ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.668704   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "etcd-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.668714   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.673134   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673157   77373 pod_ready.go:82] duration metric: took 4.432292ms for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.673168   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-apiserver-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.673177   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:29.734940   77373 pod_ready.go:98] node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734965   77373 pod_ready.go:82] duration metric: took 61.774649ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	E1011 22:25:29.734974   77373 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-390487" hosting pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-390487" has status "Ready":"False"
	I1011 22:25:29.734980   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134816   77373 pod_ready.go:93] pod "kube-proxy-82p2c" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:30.134843   77373 pod_ready.go:82] duration metric: took 399.851043ms for pod "kube-proxy-82p2c" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:30.134856   77373 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:32.143137   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
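
The pod_ready waits above boil down to reading each pod's PodReady condition in the kube-system namespace. A small client-go sketch of that check (assumes client-go is available and a kubeconfig path is passed as the first argument; the pod name is taken from the log):

    // podready.go - illustrative sketch of the "Ready" condition check that
    // pod_ready.go performs above; not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // kubeconfig path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "kube-scheduler-no-preload-390487", metav1.GetOptions{}) // pod name from the log
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
            }
        }
    }
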
	I1011 22:25:28.068561   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:28.568455   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.067742   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.567822   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.068410   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:30.568702   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.067710   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:31.568306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.067987   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:32.568699   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:29.451926   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:31.452961   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.145457   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.643721   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:34.143610   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:36.641435   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:33.068460   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.568303   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.068306   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:34.568071   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.068400   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:35.567953   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.068027   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:36.568341   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.068519   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:37.567799   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:33.951339   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:35.952408   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.450537   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.644336   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.144815   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:38.642041   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.143153   77373 pod_ready.go:103] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:41.641922   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:25:41.641949   77373 pod_ready.go:82] duration metric: took 11.507084936s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:41.641962   77373 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	I1011 22:25:38.067950   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:38.568116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.067734   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:39.567890   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.068391   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.568103   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.068168   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:41.567844   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.068152   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:42.568166   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:40.451326   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:42.451670   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.643191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.643486   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.648037   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:45.648090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:43.068478   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:43.567897   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.067812   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.568379   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.068030   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:45.568077   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.068431   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:46.568692   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.068182   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:47.568323   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:44.451907   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:46.950763   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.144086   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.144203   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.144498   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:47.649490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:50.148831   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:52.148997   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:48.067775   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:48.568667   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.068774   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.568581   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.068143   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:50.567817   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.067816   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:51.568577   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.068513   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:52.568483   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:49.451637   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:51.952434   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.643929   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.645968   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:54.149692   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.649774   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:53.068035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:53.568456   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.067825   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:54.567751   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:55.067899   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:55.067986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:55.106989   78126 cri.go:89] found id: ""
	I1011 22:25:55.107021   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.107029   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:55.107034   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:55.107082   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:55.145680   78126 cri.go:89] found id: ""
	I1011 22:25:55.145715   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.145727   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:55.145737   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:55.145803   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:55.180352   78126 cri.go:89] found id: ""
	I1011 22:25:55.180380   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.180389   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:55.180394   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:55.180442   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:55.220216   78126 cri.go:89] found id: ""
	I1011 22:25:55.220243   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.220254   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:55.220261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:55.220323   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:55.255533   78126 cri.go:89] found id: ""
	I1011 22:25:55.255556   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.255564   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:55.255570   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:55.255626   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:55.292316   78126 cri.go:89] found id: ""
	I1011 22:25:55.292348   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.292359   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:55.292366   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:55.292419   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:55.334375   78126 cri.go:89] found id: ""
	I1011 22:25:55.334412   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.334422   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:55.334435   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:55.334494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:55.369564   78126 cri.go:89] found id: ""
	I1011 22:25:55.369595   78126 logs.go:282] 0 containers: []
	W1011 22:25:55.369606   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:55.369617   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:55.369631   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:55.421923   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:55.421959   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:25:55.436413   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:55.436442   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:55.562942   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:55.562962   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:55.562973   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:55.641544   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:55.641576   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:54.456563   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:56.952097   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.143734   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.146350   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:59.148063   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.148608   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:25:58.190744   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:25:58.204070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:25:58.204148   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:25:58.240446   78126 cri.go:89] found id: ""
	I1011 22:25:58.240473   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.240483   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:25:58.240490   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:25:58.240552   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:25:58.279669   78126 cri.go:89] found id: ""
	I1011 22:25:58.279691   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.279699   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:25:58.279704   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:25:58.279763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:25:58.319133   78126 cri.go:89] found id: ""
	I1011 22:25:58.319164   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.319176   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:25:58.319183   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:25:58.319255   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:25:58.363150   78126 cri.go:89] found id: ""
	I1011 22:25:58.363184   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.363197   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:25:58.363204   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:25:58.363267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:25:58.418168   78126 cri.go:89] found id: ""
	I1011 22:25:58.418195   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.418202   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:25:58.418208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:25:58.418266   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:25:58.484143   78126 cri.go:89] found id: ""
	I1011 22:25:58.484171   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.484183   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:25:58.484191   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:25:58.484244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:25:58.534105   78126 cri.go:89] found id: ""
	I1011 22:25:58.534131   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.534139   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:25:58.534145   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:25:58.534198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:25:58.571918   78126 cri.go:89] found id: ""
	I1011 22:25:58.571946   78126 logs.go:282] 0 containers: []
	W1011 22:25:58.571954   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:25:58.571962   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:25:58.571974   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:25:58.661207   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:25:58.661237   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:25:58.661249   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:25:58.739714   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:25:58.739748   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:58.787079   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:25:58.787111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:25:58.841918   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:25:58.841956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.358606   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:01.372604   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:01.372677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:01.410514   78126 cri.go:89] found id: ""
	I1011 22:26:01.410543   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.410553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:01.410568   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:01.410659   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:01.448642   78126 cri.go:89] found id: ""
	I1011 22:26:01.448672   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.448682   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:01.448689   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:01.448752   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:01.486279   78126 cri.go:89] found id: ""
	I1011 22:26:01.486325   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.486333   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:01.486338   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:01.486388   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:01.522123   78126 cri.go:89] found id: ""
	I1011 22:26:01.522157   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.522165   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:01.522172   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:01.522259   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:01.558771   78126 cri.go:89] found id: ""
	I1011 22:26:01.558800   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.558809   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:01.558815   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:01.558874   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:01.596196   78126 cri.go:89] found id: ""
	I1011 22:26:01.596219   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.596227   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:01.596233   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:01.596281   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:01.633408   78126 cri.go:89] found id: ""
	I1011 22:26:01.633432   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.633439   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:01.633444   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:01.633497   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:01.670988   78126 cri.go:89] found id: ""
	I1011 22:26:01.671014   78126 logs.go:282] 0 containers: []
	W1011 22:26:01.671021   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:01.671029   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:01.671038   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:01.723724   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:01.723759   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:01.738130   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:01.738156   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:01.806143   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:01.806172   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:01.806187   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:01.884976   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:01.885022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:25:59.451436   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:01.452136   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.643807   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.644664   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:03.149089   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.152410   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:04.424411   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:04.444762   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:04.444822   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:04.479465   78126 cri.go:89] found id: ""
	I1011 22:26:04.479494   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.479502   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:04.479508   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:04.479557   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:04.514296   78126 cri.go:89] found id: ""
	I1011 22:26:04.514325   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.514335   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:04.514344   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:04.514408   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:04.550226   78126 cri.go:89] found id: ""
	I1011 22:26:04.550256   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.550266   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:04.550273   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:04.550331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:04.584440   78126 cri.go:89] found id: ""
	I1011 22:26:04.584466   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.584475   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:04.584480   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:04.584546   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:04.619216   78126 cri.go:89] found id: ""
	I1011 22:26:04.619245   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.619254   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:04.619261   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:04.619315   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:04.661003   78126 cri.go:89] found id: ""
	I1011 22:26:04.661028   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.661036   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:04.661041   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:04.661097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:04.698582   78126 cri.go:89] found id: ""
	I1011 22:26:04.698609   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.698638   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:04.698646   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:04.698710   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:04.739986   78126 cri.go:89] found id: ""
	I1011 22:26:04.740014   78126 logs.go:282] 0 containers: []
	W1011 22:26:04.740024   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:04.740034   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:04.740047   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:04.821681   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:04.821718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:04.860016   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:04.860041   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:04.912801   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:04.912835   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:04.926816   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:04.926848   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:05.002788   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.503539   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:07.517672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:07.517750   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:07.553676   78126 cri.go:89] found id: ""
	I1011 22:26:07.553710   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.553721   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:07.553729   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:07.553791   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:07.594568   78126 cri.go:89] found id: ""
	I1011 22:26:07.594595   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.594603   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:07.594609   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:07.594679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:07.631127   78126 cri.go:89] found id: ""
	I1011 22:26:07.631153   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.631161   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:07.631166   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:07.631216   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:07.671881   78126 cri.go:89] found id: ""
	I1011 22:26:07.671905   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.671913   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:07.671918   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:07.671963   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:07.713537   78126 cri.go:89] found id: ""
	I1011 22:26:07.713565   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.713573   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:07.713578   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:07.713642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:07.759526   78126 cri.go:89] found id: ""
	I1011 22:26:07.759555   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.759565   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:07.759572   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:07.759628   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:07.797709   78126 cri.go:89] found id: ""
	I1011 22:26:07.797732   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.797740   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:07.797746   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:07.797806   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:07.830989   78126 cri.go:89] found id: ""
	I1011 22:26:07.831020   78126 logs.go:282] 0 containers: []
	W1011 22:26:07.831031   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:07.831041   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:07.831055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:07.881620   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:07.881652   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:07.897542   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:07.897570   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:26:03.952386   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:05.952562   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.645291   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.145051   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.146419   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:07.650259   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.149242   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.149684   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:26:07.969190   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:07.969227   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:07.969242   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.045288   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:08.045321   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.589976   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:10.604705   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:10.604776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:10.640656   78126 cri.go:89] found id: ""
	I1011 22:26:10.640692   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.640707   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:10.640715   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:10.640776   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:10.680632   78126 cri.go:89] found id: ""
	I1011 22:26:10.680658   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.680666   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:10.680680   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:10.680730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:10.718064   78126 cri.go:89] found id: ""
	I1011 22:26:10.718089   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.718097   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:10.718103   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:10.718158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:10.756014   78126 cri.go:89] found id: ""
	I1011 22:26:10.756043   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.756054   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:10.756061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:10.756125   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:10.791304   78126 cri.go:89] found id: ""
	I1011 22:26:10.791330   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.791338   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:10.791343   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:10.791391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:10.828401   78126 cri.go:89] found id: ""
	I1011 22:26:10.828432   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.828444   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:10.828452   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:10.828514   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:10.871459   78126 cri.go:89] found id: ""
	I1011 22:26:10.871500   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.871512   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:10.871520   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:10.871691   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:10.907952   78126 cri.go:89] found id: ""
	I1011 22:26:10.907985   78126 logs.go:282] 0 containers: []
	W1011 22:26:10.907997   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:10.908007   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:10.908022   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:10.953614   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:10.953642   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:11.003264   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:11.003299   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:11.017494   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:11.017522   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:11.086947   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:11.086975   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:11.086989   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:08.452508   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:10.952101   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:12.953125   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.645067   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.646842   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:14.149723   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:16.649874   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:13.664493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:13.678550   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:13.678634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:13.717617   78126 cri.go:89] found id: ""
	I1011 22:26:13.717644   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.717653   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:13.717659   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:13.717723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:13.755330   78126 cri.go:89] found id: ""
	I1011 22:26:13.755362   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.755371   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:13.755378   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:13.755450   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:13.803590   78126 cri.go:89] found id: ""
	I1011 22:26:13.803614   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.803622   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:13.803627   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:13.803683   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:13.838386   78126 cri.go:89] found id: ""
	I1011 22:26:13.838415   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.838423   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:13.838430   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:13.838487   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:13.877314   78126 cri.go:89] found id: ""
	I1011 22:26:13.877343   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.877353   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:13.877360   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:13.877423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:13.915382   78126 cri.go:89] found id: ""
	I1011 22:26:13.915407   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.915415   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:13.915421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:13.915471   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:13.956756   78126 cri.go:89] found id: ""
	I1011 22:26:13.956782   78126 logs.go:282] 0 containers: []
	W1011 22:26:13.956794   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:13.956799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:13.956857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:14.002041   78126 cri.go:89] found id: ""
	I1011 22:26:14.002076   78126 logs.go:282] 0 containers: []
	W1011 22:26:14.002087   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:14.002098   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:14.002113   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:14.084948   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:14.084987   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:14.130428   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:14.130456   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:14.184937   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:14.184981   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:14.199405   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:14.199431   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:14.278685   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:16.778857   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:16.794159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:16.794253   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:16.834729   78126 cri.go:89] found id: ""
	I1011 22:26:16.834755   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.834762   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:16.834768   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:16.834819   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:16.868576   78126 cri.go:89] found id: ""
	I1011 22:26:16.868601   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.868608   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:16.868614   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:16.868672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:16.902809   78126 cri.go:89] found id: ""
	I1011 22:26:16.902835   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.902847   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:16.902854   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:16.902918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:16.937930   78126 cri.go:89] found id: ""
	I1011 22:26:16.937956   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.937966   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:16.937974   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:16.938036   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:16.975067   78126 cri.go:89] found id: ""
	I1011 22:26:16.975095   78126 logs.go:282] 0 containers: []
	W1011 22:26:16.975109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:16.975116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:16.975205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:17.009635   78126 cri.go:89] found id: ""
	I1011 22:26:17.009675   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.009687   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:17.009694   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:17.009758   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:17.049420   78126 cri.go:89] found id: ""
	I1011 22:26:17.049446   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.049454   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:17.049460   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:17.049508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:17.083642   78126 cri.go:89] found id: ""
	I1011 22:26:17.083669   78126 logs.go:282] 0 containers: []
	W1011 22:26:17.083680   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:17.083690   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:17.083704   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:17.158584   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:17.158606   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:17.158638   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:17.241306   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:17.241381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:17.280128   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:17.280162   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:17.332026   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:17.332062   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:15.451781   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:17.951419   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.144547   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.145544   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.151415   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:21.649239   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:19.845784   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:19.858905   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:19.858966   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:19.899434   78126 cri.go:89] found id: ""
	I1011 22:26:19.899459   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.899474   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:19.899480   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:19.899535   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:19.934670   78126 cri.go:89] found id: ""
	I1011 22:26:19.934704   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.934717   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:19.934723   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:19.934785   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:19.974212   78126 cri.go:89] found id: ""
	I1011 22:26:19.974235   78126 logs.go:282] 0 containers: []
	W1011 22:26:19.974242   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:19.974248   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:19.974296   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:20.009143   78126 cri.go:89] found id: ""
	I1011 22:26:20.009169   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.009179   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:20.009186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:20.009252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:20.046729   78126 cri.go:89] found id: ""
	I1011 22:26:20.046755   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.046766   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:20.046773   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:20.046835   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:20.080682   78126 cri.go:89] found id: ""
	I1011 22:26:20.080707   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.080723   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:20.080730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:20.080793   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:20.114889   78126 cri.go:89] found id: ""
	I1011 22:26:20.114916   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.114924   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:20.114930   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:20.114988   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:20.156952   78126 cri.go:89] found id: ""
	I1011 22:26:20.156973   78126 logs.go:282] 0 containers: []
	W1011 22:26:20.156980   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:20.156987   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:20.156998   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:20.209935   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:20.209969   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:20.224675   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:20.224714   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:20.310435   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:20.310457   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:20.310481   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:20.391693   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:20.391734   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:22.930597   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:20.450507   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.450680   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:23.643586   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.144617   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:24.149159   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.649041   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:22.944043   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:22.944122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:22.978759   78126 cri.go:89] found id: ""
	I1011 22:26:22.978782   78126 logs.go:282] 0 containers: []
	W1011 22:26:22.978792   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:22.978799   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:22.978868   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:23.012778   78126 cri.go:89] found id: ""
	I1011 22:26:23.012809   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.012821   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:23.012828   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:23.012881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:23.050330   78126 cri.go:89] found id: ""
	I1011 22:26:23.050362   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.050374   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:23.050380   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:23.050443   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:23.088330   78126 cri.go:89] found id: ""
	I1011 22:26:23.088359   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.088368   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:23.088375   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:23.088433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:23.125942   78126 cri.go:89] found id: ""
	I1011 22:26:23.125965   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.125973   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:23.125979   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:23.126025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:23.167557   78126 cri.go:89] found id: ""
	I1011 22:26:23.167588   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.167598   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:23.167606   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:23.167657   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:23.202270   78126 cri.go:89] found id: ""
	I1011 22:26:23.202295   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.202302   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:23.202308   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:23.202367   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:23.238411   78126 cri.go:89] found id: ""
	I1011 22:26:23.238437   78126 logs.go:282] 0 containers: []
	W1011 22:26:23.238444   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:23.238453   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:23.238469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:23.289581   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:23.289614   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:23.303507   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:23.303532   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:23.377834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:23.377858   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:23.377873   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:23.456374   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:23.456419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.002495   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:26.016196   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:26.016267   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:26.050863   78126 cri.go:89] found id: ""
	I1011 22:26:26.050914   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.050926   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:26.050933   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:26.050994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:26.089055   78126 cri.go:89] found id: ""
	I1011 22:26:26.089080   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.089087   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:26.089092   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:26.089163   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:26.124253   78126 cri.go:89] found id: ""
	I1011 22:26:26.124282   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.124293   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:26.124301   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:26.124356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:26.163228   78126 cri.go:89] found id: ""
	I1011 22:26:26.163257   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.163268   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:26.163276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:26.163338   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:26.200868   78126 cri.go:89] found id: ""
	I1011 22:26:26.200894   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.200902   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:26.200907   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:26.200953   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:26.237210   78126 cri.go:89] found id: ""
	I1011 22:26:26.237239   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.237250   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:26.237258   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:26.237320   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:26.272807   78126 cri.go:89] found id: ""
	I1011 22:26:26.272833   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.272843   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:26.272850   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:26.272911   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:26.308615   78126 cri.go:89] found id: ""
	I1011 22:26:26.308642   78126 logs.go:282] 0 containers: []
	W1011 22:26:26.308652   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:26.308663   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:26.308689   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:26.406605   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:26.406649   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:26.446490   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:26.446516   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:26.502346   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:26.502391   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:26.518985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:26.519012   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:26.592239   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:24.451584   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:26.451685   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.643757   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.143786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:28.650003   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.148367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:29.092719   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:29.106914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:29.106989   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:29.147508   78126 cri.go:89] found id: ""
	I1011 22:26:29.147538   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.147549   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:29.147557   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:29.147617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:29.186161   78126 cri.go:89] found id: ""
	I1011 22:26:29.186185   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.186194   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:29.186200   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:29.186263   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:29.221638   78126 cri.go:89] found id: ""
	I1011 22:26:29.221669   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.221678   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:29.221684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:29.221741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:29.261723   78126 cri.go:89] found id: ""
	I1011 22:26:29.261747   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.261755   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:29.261761   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:29.261818   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:29.295195   78126 cri.go:89] found id: ""
	I1011 22:26:29.295223   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.295234   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:29.295242   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:29.295321   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:29.334482   78126 cri.go:89] found id: ""
	I1011 22:26:29.334517   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.334525   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:29.334532   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:29.334581   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:29.370362   78126 cri.go:89] found id: ""
	I1011 22:26:29.370389   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.370397   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:29.370403   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:29.370449   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:29.407811   78126 cri.go:89] found id: ""
	I1011 22:26:29.407838   78126 logs.go:282] 0 containers: []
	W1011 22:26:29.407845   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:29.407854   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:29.407868   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:29.483970   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:29.483995   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:29.484010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:29.561483   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:29.561519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:29.600438   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:29.600469   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:29.655282   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:29.655315   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.169398   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:32.182799   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:32.182852   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:32.220721   78126 cri.go:89] found id: ""
	I1011 22:26:32.220746   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.220754   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:32.220759   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:32.220802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:32.255544   78126 cri.go:89] found id: ""
	I1011 22:26:32.255587   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.255598   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:32.255605   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:32.255668   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:32.287504   78126 cri.go:89] found id: ""
	I1011 22:26:32.287534   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.287546   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:32.287553   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:32.287605   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:32.321545   78126 cri.go:89] found id: ""
	I1011 22:26:32.321574   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.321584   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:32.321590   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:32.321639   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:32.357047   78126 cri.go:89] found id: ""
	I1011 22:26:32.357070   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.357077   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:32.357082   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:32.357139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:32.391687   78126 cri.go:89] found id: ""
	I1011 22:26:32.391725   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.391736   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:32.391744   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:32.391809   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:32.432144   78126 cri.go:89] found id: ""
	I1011 22:26:32.432170   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.432178   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:32.432185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:32.432248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:32.489417   78126 cri.go:89] found id: ""
	I1011 22:26:32.489449   78126 logs.go:282] 0 containers: []
	W1011 22:26:32.489457   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:32.489465   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:32.489476   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:32.503278   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:32.503303   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:32.572297   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:32.572317   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:32.572332   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:32.652096   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:32.652124   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:32.690883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:32.690910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:28.952410   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:31.450990   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149257   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.644354   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:33.149882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.648376   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.242160   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:35.255276   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:35.255350   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:35.295359   78126 cri.go:89] found id: ""
	I1011 22:26:35.295387   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.295397   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:35.295403   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:35.295472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:35.329199   78126 cri.go:89] found id: ""
	I1011 22:26:35.329223   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.329231   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:35.329236   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:35.329293   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:35.364143   78126 cri.go:89] found id: ""
	I1011 22:26:35.364173   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.364184   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:35.364190   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:35.364250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:35.399090   78126 cri.go:89] found id: ""
	I1011 22:26:35.399119   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.399130   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:35.399137   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:35.399201   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:35.438349   78126 cri.go:89] found id: ""
	I1011 22:26:35.438376   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.438385   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:35.438392   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:35.438457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:35.474003   78126 cri.go:89] found id: ""
	I1011 22:26:35.474031   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.474041   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:35.474048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:35.474115   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:35.512901   78126 cri.go:89] found id: ""
	I1011 22:26:35.512924   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.512932   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:35.512938   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:35.512991   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:35.546589   78126 cri.go:89] found id: ""
	I1011 22:26:35.546623   78126 logs.go:282] 0 containers: []
	W1011 22:26:35.546634   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:35.546647   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:35.546660   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:35.596894   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:35.596926   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:35.610379   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:35.610400   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:35.684356   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:35.684380   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:35.684395   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:35.760006   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:35.760039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:33.951428   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:35.951901   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.143140   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.144224   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:37.649082   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:39.650580   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.148945   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:38.302550   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:38.316840   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:38.316913   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:38.351391   78126 cri.go:89] found id: ""
	I1011 22:26:38.351423   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.351434   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:38.351441   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:38.351521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:38.395844   78126 cri.go:89] found id: ""
	I1011 22:26:38.395882   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.395901   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:38.395908   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:38.395974   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:38.429979   78126 cri.go:89] found id: ""
	I1011 22:26:38.430008   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.430021   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:38.430028   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:38.430077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:38.465942   78126 cri.go:89] found id: ""
	I1011 22:26:38.465969   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.465980   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:38.465987   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:38.466049   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:38.500871   78126 cri.go:89] found id: ""
	I1011 22:26:38.500903   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.500915   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:38.500923   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:38.500978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:38.544644   78126 cri.go:89] found id: ""
	I1011 22:26:38.544670   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.544678   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:38.544684   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:38.544735   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:38.583593   78126 cri.go:89] found id: ""
	I1011 22:26:38.583622   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.583633   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:38.583640   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:38.583695   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:38.627174   78126 cri.go:89] found id: ""
	I1011 22:26:38.627195   78126 logs.go:282] 0 containers: []
	W1011 22:26:38.627203   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:38.627210   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:38.627222   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:38.642008   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:38.642058   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:38.710834   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:38.710859   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:38.710876   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:38.786344   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:38.786377   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.833520   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:38.833543   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.387426   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:41.402456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:41.402523   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:41.442012   78126 cri.go:89] found id: ""
	I1011 22:26:41.442039   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.442049   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:41.442057   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:41.442117   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:41.482806   78126 cri.go:89] found id: ""
	I1011 22:26:41.482832   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.482842   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:41.482849   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:41.482906   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:41.520515   78126 cri.go:89] found id: ""
	I1011 22:26:41.520548   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.520556   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:41.520561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:41.520612   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:41.562498   78126 cri.go:89] found id: ""
	I1011 22:26:41.562523   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.562532   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:41.562540   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:41.562598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:41.600227   78126 cri.go:89] found id: ""
	I1011 22:26:41.600262   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.600275   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:41.600283   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:41.600340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:41.634678   78126 cri.go:89] found id: ""
	I1011 22:26:41.634711   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.634722   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:41.634730   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:41.634786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:41.672127   78126 cri.go:89] found id: ""
	I1011 22:26:41.672160   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.672171   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:41.672182   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:41.672242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:41.714429   78126 cri.go:89] found id: ""
	I1011 22:26:41.714458   78126 logs.go:282] 0 containers: []
	W1011 22:26:41.714477   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:41.714488   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:41.714501   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:41.761489   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:41.761521   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:41.774978   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:41.775005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:41.844152   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:41.844177   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:41.844192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:41.927420   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:41.927468   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:38.451431   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:40.951642   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:42.951753   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.644548   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.144055   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.649705   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.148731   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:44.468634   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:44.482138   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:44.482217   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:44.515869   78126 cri.go:89] found id: ""
	I1011 22:26:44.515899   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.515910   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:44.515918   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:44.515979   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:44.551575   78126 cri.go:89] found id: ""
	I1011 22:26:44.551607   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.551617   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:44.551625   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:44.551689   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:44.602027   78126 cri.go:89] found id: ""
	I1011 22:26:44.602049   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.602059   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:44.602067   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:44.602122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:44.649375   78126 cri.go:89] found id: ""
	I1011 22:26:44.649415   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.649426   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:44.649434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:44.649502   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:44.707061   78126 cri.go:89] found id: ""
	I1011 22:26:44.707093   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.707103   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:44.707110   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:44.707168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:44.745582   78126 cri.go:89] found id: ""
	I1011 22:26:44.745608   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.745615   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:44.745621   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:44.745679   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:44.779358   78126 cri.go:89] found id: ""
	I1011 22:26:44.779389   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.779400   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:44.779406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:44.779480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:44.814177   78126 cri.go:89] found id: ""
	I1011 22:26:44.814201   78126 logs.go:282] 0 containers: []
	W1011 22:26:44.814209   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:44.814217   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:44.814229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.865040   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:44.865071   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:44.878692   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:44.878717   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:44.951946   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:44.951968   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:44.951983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:45.032386   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:45.032426   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:47.575868   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:47.591299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:47.591372   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:47.630396   78126 cri.go:89] found id: ""
	I1011 22:26:47.630419   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.630427   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:47.630432   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:47.630480   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:47.671876   78126 cri.go:89] found id: ""
	I1011 22:26:47.671899   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.671907   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:47.671912   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:47.671998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:47.705199   78126 cri.go:89] found id: ""
	I1011 22:26:47.705226   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.705236   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:47.705243   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:47.705302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:47.738610   78126 cri.go:89] found id: ""
	I1011 22:26:47.738648   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.738659   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:47.738666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:47.738723   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:47.773045   78126 cri.go:89] found id: ""
	I1011 22:26:47.773075   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.773085   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:47.773093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:47.773145   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:47.807617   78126 cri.go:89] found id: ""
	I1011 22:26:47.807643   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.807651   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:47.807657   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:47.807711   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:47.846578   78126 cri.go:89] found id: ""
	I1011 22:26:47.846607   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.846637   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:47.846645   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:47.846706   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:47.885314   78126 cri.go:89] found id: ""
	I1011 22:26:47.885340   78126 logs.go:282] 0 containers: []
	W1011 22:26:47.885351   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:47.885361   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:47.885375   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:44.952282   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.451649   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.643384   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:52.143369   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:49.150143   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.648664   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:47.940590   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:47.940622   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:47.954803   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:47.954827   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:48.023326   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:48.023353   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:48.023366   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:48.106094   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:48.106128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.648633   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:50.662294   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:50.662355   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:50.697197   78126 cri.go:89] found id: ""
	I1011 22:26:50.697234   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.697245   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:50.697252   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:50.697310   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:50.732058   78126 cri.go:89] found id: ""
	I1011 22:26:50.732085   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.732096   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:50.732103   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:50.732158   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:50.766640   78126 cri.go:89] found id: ""
	I1011 22:26:50.766666   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.766676   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:50.766683   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:50.766746   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:50.800039   78126 cri.go:89] found id: ""
	I1011 22:26:50.800063   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.800075   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:50.800081   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:50.800139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:50.834444   78126 cri.go:89] found id: ""
	I1011 22:26:50.834480   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.834489   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:50.834494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:50.834549   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:50.873142   78126 cri.go:89] found id: ""
	I1011 22:26:50.873169   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.873179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:50.873186   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:50.873252   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:50.905966   78126 cri.go:89] found id: ""
	I1011 22:26:50.905989   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.905997   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:50.906002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:50.906059   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:50.940963   78126 cri.go:89] found id: ""
	I1011 22:26:50.940996   78126 logs.go:282] 0 containers: []
	W1011 22:26:50.941005   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:50.941013   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:50.941023   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:50.982334   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:50.982360   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:51.034951   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:51.034984   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:51.049185   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:51.049210   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:51.124893   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:51.124914   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:51.124930   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:49.951912   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:51.955275   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.144438   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.145153   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:54.149232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.648245   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:53.711999   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:53.725494   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:53.725570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:53.760397   78126 cri.go:89] found id: ""
	I1011 22:26:53.760422   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.760433   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:53.760439   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:53.760507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:53.797363   78126 cri.go:89] found id: ""
	I1011 22:26:53.797393   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.797405   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:53.797412   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:53.797482   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:53.832003   78126 cri.go:89] found id: ""
	I1011 22:26:53.832031   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.832042   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:53.832049   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:53.832109   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:53.876580   78126 cri.go:89] found id: ""
	I1011 22:26:53.876604   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.876611   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:53.876618   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:53.876672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:53.911377   78126 cri.go:89] found id: ""
	I1011 22:26:53.911404   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.911414   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:53.911421   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:53.911469   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:53.946674   78126 cri.go:89] found id: ""
	I1011 22:26:53.946703   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.946713   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:53.946728   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:53.946786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:53.984958   78126 cri.go:89] found id: ""
	I1011 22:26:53.984991   78126 logs.go:282] 0 containers: []
	W1011 22:26:53.984999   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:53.985005   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:53.985062   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:54.020130   78126 cri.go:89] found id: ""
	I1011 22:26:54.020153   78126 logs.go:282] 0 containers: []
	W1011 22:26:54.020161   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:54.020168   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:54.020188   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:54.073822   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:54.073856   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:54.088167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:54.088201   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:54.159627   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:54.159656   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:54.159673   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.235740   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:54.235773   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:56.775819   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:56.789305   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:56.789379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:56.826462   78126 cri.go:89] found id: ""
	I1011 22:26:56.826495   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.826506   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:56.826513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:56.826580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:56.860248   78126 cri.go:89] found id: ""
	I1011 22:26:56.860282   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.860291   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:56.860299   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:56.860361   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:56.897673   78126 cri.go:89] found id: ""
	I1011 22:26:56.897706   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.897718   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:56.897725   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:56.897786   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:26:56.932630   78126 cri.go:89] found id: ""
	I1011 22:26:56.932653   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.932660   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:26:56.932666   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:26:56.932720   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:26:56.967360   78126 cri.go:89] found id: ""
	I1011 22:26:56.967387   78126 logs.go:282] 0 containers: []
	W1011 22:26:56.967398   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:26:56.967410   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:26:56.967470   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:26:57.003955   78126 cri.go:89] found id: ""
	I1011 22:26:57.003981   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.003989   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:26:57.003995   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:26:57.004054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:26:57.041635   78126 cri.go:89] found id: ""
	I1011 22:26:57.041669   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.041681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:26:57.041688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:26:57.041755   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:26:57.079951   78126 cri.go:89] found id: ""
	I1011 22:26:57.079974   78126 logs.go:282] 0 containers: []
	W1011 22:26:57.079982   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:26:57.079990   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:26:57.080005   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:26:57.121909   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:26:57.121944   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:26:57.174746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:26:57.174777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:26:57.188029   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:26:57.188059   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:26:57.256272   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:26:57.256294   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:26:57.256308   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:26:54.451964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:56.952084   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:58.643527   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:00.644703   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.148916   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:26:59.843134   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.856411   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:26:59.856481   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:26:59.893903   78126 cri.go:89] found id: ""
	I1011 22:26:59.893934   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.893944   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:26:59.893950   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:26:59.893996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:26:59.930083   78126 cri.go:89] found id: ""
	I1011 22:26:59.930104   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.930112   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:26:59.930117   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:26:59.930168   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:26:59.964892   78126 cri.go:89] found id: ""
	I1011 22:26:59.964926   78126 logs.go:282] 0 containers: []
	W1011 22:26:59.964934   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:26:59.964939   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:26:59.964987   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:00.004437   78126 cri.go:89] found id: ""
	I1011 22:27:00.004461   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.004469   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:00.004475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:00.004531   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:00.040110   78126 cri.go:89] found id: ""
	I1011 22:27:00.040134   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.040141   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:00.040146   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:00.040193   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:00.075895   78126 cri.go:89] found id: ""
	I1011 22:27:00.075922   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.075929   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:00.075935   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:00.075993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:00.109144   78126 cri.go:89] found id: ""
	I1011 22:27:00.109173   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.109182   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:00.109187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:00.109242   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:00.145045   78126 cri.go:89] found id: ""
	I1011 22:27:00.145069   78126 logs.go:282] 0 containers: []
	W1011 22:27:00.145080   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:00.145090   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:00.145102   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:00.197520   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:00.197553   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:00.210668   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:00.210697   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:00.286259   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:00.286281   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:00.286293   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:00.378923   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:00.378956   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:02.918151   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:26:59.452217   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:01.951461   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:03.143621   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:05.644225   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:04.148533   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.149378   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:02.933772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:02.933851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:02.969021   78126 cri.go:89] found id: ""
	I1011 22:27:02.969049   78126 logs.go:282] 0 containers: []
	W1011 22:27:02.969061   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:02.969068   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:02.969129   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:03.004293   78126 cri.go:89] found id: ""
	I1011 22:27:03.004321   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.004332   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:03.004339   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:03.004391   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:03.043602   78126 cri.go:89] found id: ""
	I1011 22:27:03.043647   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.043657   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:03.043664   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:03.043730   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:03.080294   78126 cri.go:89] found id: ""
	I1011 22:27:03.080326   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.080337   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:03.080344   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:03.080404   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:03.115183   78126 cri.go:89] found id: ""
	I1011 22:27:03.115207   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.115221   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:03.115228   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:03.115287   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:03.151516   78126 cri.go:89] found id: ""
	I1011 22:27:03.151538   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.151546   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:03.151551   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:03.151602   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:03.185979   78126 cri.go:89] found id: ""
	I1011 22:27:03.186002   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.186010   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:03.186016   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:03.186061   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:03.221602   78126 cri.go:89] found id: ""
	I1011 22:27:03.221630   78126 logs.go:282] 0 containers: []
	W1011 22:27:03.221643   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:03.221651   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:03.221661   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:03.234303   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:03.234329   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:03.309647   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:03.309674   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:03.309693   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:03.389550   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:03.389585   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:03.428021   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:03.428049   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:05.985199   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:05.998345   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:05.998406   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:06.032473   78126 cri.go:89] found id: ""
	I1011 22:27:06.032499   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.032508   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:06.032513   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:06.032570   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:06.065599   78126 cri.go:89] found id: ""
	I1011 22:27:06.065623   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.065631   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:06.065636   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:06.065694   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:06.103138   78126 cri.go:89] found id: ""
	I1011 22:27:06.103162   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.103169   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:06.103174   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:06.103231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:06.140336   78126 cri.go:89] found id: ""
	I1011 22:27:06.140364   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.140374   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:06.140381   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:06.140441   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:06.175678   78126 cri.go:89] found id: ""
	I1011 22:27:06.175710   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.175721   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:06.175729   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:06.175783   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:06.211726   78126 cri.go:89] found id: ""
	I1011 22:27:06.211758   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.211769   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:06.211777   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:06.211837   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:06.246680   78126 cri.go:89] found id: ""
	I1011 22:27:06.246708   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.246717   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:06.246724   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:06.246784   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:06.286851   78126 cri.go:89] found id: ""
	I1011 22:27:06.286876   78126 logs.go:282] 0 containers: []
	W1011 22:27:06.286885   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:06.286895   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:06.286910   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:06.300408   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:06.300438   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:06.373774   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:06.373798   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:06.373810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:06.457532   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:06.457565   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:06.498449   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:06.498475   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:03.952598   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:06.451802   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:07.645531   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.144141   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.144739   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:08.648935   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.649185   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:09.058493   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:09.072703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:09.072763   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:09.111746   78126 cri.go:89] found id: ""
	I1011 22:27:09.111775   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.111783   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:09.111788   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:09.111834   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:09.147787   78126 cri.go:89] found id: ""
	I1011 22:27:09.147813   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.147825   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:09.147832   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:09.147886   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:09.181015   78126 cri.go:89] found id: ""
	I1011 22:27:09.181045   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.181054   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:09.181061   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:09.181122   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:09.224780   78126 cri.go:89] found id: ""
	I1011 22:27:09.224805   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.224817   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:09.224824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:09.224888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:09.263791   78126 cri.go:89] found id: ""
	I1011 22:27:09.263811   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.263819   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:09.263824   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:09.263870   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:09.306351   78126 cri.go:89] found id: ""
	I1011 22:27:09.306380   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.306391   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:09.306399   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:09.306494   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:09.343799   78126 cri.go:89] found id: ""
	I1011 22:27:09.343828   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.343840   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:09.343846   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:09.343910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:09.381249   78126 cri.go:89] found id: ""
	I1011 22:27:09.381278   78126 logs.go:282] 0 containers: []
	W1011 22:27:09.381289   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:09.381299   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:09.381313   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:09.461432   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:09.461464   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:09.506658   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:09.506687   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:09.560608   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:09.560653   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:09.575010   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:09.575037   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:09.656455   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.157319   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:12.172486   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:12.172559   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:12.207518   78126 cri.go:89] found id: ""
	I1011 22:27:12.207546   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.207553   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:12.207558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:12.207606   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:12.243452   78126 cri.go:89] found id: ""
	I1011 22:27:12.243494   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.243501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:12.243508   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:12.243567   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:12.278869   78126 cri.go:89] found id: ""
	I1011 22:27:12.278894   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.278902   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:12.278908   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:12.278952   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:12.314427   78126 cri.go:89] found id: ""
	I1011 22:27:12.314456   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.314474   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:12.314481   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:12.314547   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:12.349328   78126 cri.go:89] found id: ""
	I1011 22:27:12.349354   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.349365   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:12.349372   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:12.349432   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:12.384140   78126 cri.go:89] found id: ""
	I1011 22:27:12.384171   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.384179   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:12.384185   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:12.384248   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:12.417971   78126 cri.go:89] found id: ""
	I1011 22:27:12.418001   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.418011   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:12.418017   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:12.418073   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:12.455349   78126 cri.go:89] found id: ""
	I1011 22:27:12.455377   78126 logs.go:282] 0 containers: []
	W1011 22:27:12.455388   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:12.455397   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:12.455411   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:12.468825   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:12.468851   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:12.539175   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:12.539197   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:12.539209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:12.619396   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:12.619427   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:12.660972   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:12.661000   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:08.951257   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:10.951915   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:13.451012   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:14.643844   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:16.643951   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:12.651766   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.148176   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.148231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:15.216343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:15.229169   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:15.229227   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:15.265187   78126 cri.go:89] found id: ""
	I1011 22:27:15.265217   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.265225   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:15.265231   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:15.265276   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:15.298894   78126 cri.go:89] found id: ""
	I1011 22:27:15.298926   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.298939   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:15.298948   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:15.299054   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:15.333512   78126 cri.go:89] found id: ""
	I1011 22:27:15.333543   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.333554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:15.333561   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:15.333620   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:15.365674   78126 cri.go:89] found id: ""
	I1011 22:27:15.365704   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.365714   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:15.365721   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:15.365779   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:15.398504   78126 cri.go:89] found id: ""
	I1011 22:27:15.398528   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.398536   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:15.398541   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:15.398588   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:15.432808   78126 cri.go:89] found id: ""
	I1011 22:27:15.432836   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.432848   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:15.432856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:15.432918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:15.468985   78126 cri.go:89] found id: ""
	I1011 22:27:15.469014   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.469024   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:15.469031   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:15.469090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:15.502897   78126 cri.go:89] found id: ""
	I1011 22:27:15.502929   78126 logs.go:282] 0 containers: []
	W1011 22:27:15.502941   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:15.502952   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:15.502963   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:15.582686   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:15.582723   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:15.625983   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:15.626017   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:15.678285   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:15.678328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:15.693115   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:15.693142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:15.763082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:15.452119   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:17.951679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.144439   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.644786   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:19.647581   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.649450   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:18.264038   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:18.277159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:18.277244   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:18.312400   78126 cri.go:89] found id: ""
	I1011 22:27:18.312427   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.312436   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:18.312446   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:18.312508   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:18.343872   78126 cri.go:89] found id: ""
	I1011 22:27:18.343901   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.343913   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:18.343920   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:18.343983   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:18.384468   78126 cri.go:89] found id: ""
	I1011 22:27:18.384505   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.384516   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:18.384523   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:18.384586   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:18.424914   78126 cri.go:89] found id: ""
	I1011 22:27:18.424942   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.424953   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:18.424960   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:18.425018   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:18.480715   78126 cri.go:89] found id: ""
	I1011 22:27:18.480749   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.480760   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:18.480769   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:18.480830   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:18.516382   78126 cri.go:89] found id: ""
	I1011 22:27:18.516418   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.516428   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:18.516434   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:18.516488   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:18.553279   78126 cri.go:89] found id: ""
	I1011 22:27:18.553308   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.553319   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:18.553326   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:18.553392   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:18.594545   78126 cri.go:89] found id: ""
	I1011 22:27:18.594574   78126 logs.go:282] 0 containers: []
	W1011 22:27:18.594583   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:18.594592   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:18.594603   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:18.673894   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:18.673933   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:18.715324   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:18.715354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:18.768704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:18.768738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:18.783065   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:18.783091   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:18.858255   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.358677   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:21.372080   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:21.372147   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:21.407613   78126 cri.go:89] found id: ""
	I1011 22:27:21.407637   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.407644   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:21.407650   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:21.407707   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:21.442694   78126 cri.go:89] found id: ""
	I1011 22:27:21.442722   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.442732   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:21.442739   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:21.442800   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:21.475468   78126 cri.go:89] found id: ""
	I1011 22:27:21.475498   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.475507   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:21.475513   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:21.475560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:21.511497   78126 cri.go:89] found id: ""
	I1011 22:27:21.511521   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.511528   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:21.511534   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:21.511593   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:21.549089   78126 cri.go:89] found id: ""
	I1011 22:27:21.549114   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.549123   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:21.549130   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:21.549179   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:21.585605   78126 cri.go:89] found id: ""
	I1011 22:27:21.585636   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.585647   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:21.585654   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:21.585709   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:21.620422   78126 cri.go:89] found id: ""
	I1011 22:27:21.620453   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.620463   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:21.620473   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:21.620521   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:21.657288   78126 cri.go:89] found id: ""
	I1011 22:27:21.657314   78126 logs.go:282] 0 containers: []
	W1011 22:27:21.657331   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:21.657340   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:21.657354   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:21.671121   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:21.671148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:21.744707   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:21.744727   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:21.744738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:21.821935   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:21.821971   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:21.863498   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:21.863525   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:19.952158   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:21.952425   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.143206   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.143587   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.148823   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:24.417344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:24.431704   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:24.431771   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:24.469477   78126 cri.go:89] found id: ""
	I1011 22:27:24.469506   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.469517   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:24.469524   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:24.469587   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:24.507271   78126 cri.go:89] found id: ""
	I1011 22:27:24.507301   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.507312   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:24.507319   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:24.507381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:24.542887   78126 cri.go:89] found id: ""
	I1011 22:27:24.542912   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.542922   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:24.542929   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:24.542997   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:24.575914   78126 cri.go:89] found id: ""
	I1011 22:27:24.575940   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.575948   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:24.575954   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:24.576021   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:24.616753   78126 cri.go:89] found id: ""
	I1011 22:27:24.616775   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.616784   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:24.616792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:24.616851   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:24.654415   78126 cri.go:89] found id: ""
	I1011 22:27:24.654440   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.654449   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:24.654455   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:24.654519   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:24.688047   78126 cri.go:89] found id: ""
	I1011 22:27:24.688073   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.688083   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:24.688088   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:24.688135   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:24.724944   78126 cri.go:89] found id: ""
	I1011 22:27:24.724970   78126 logs.go:282] 0 containers: []
	W1011 22:27:24.724981   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:24.724990   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:24.725003   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:24.775805   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:24.775841   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:24.790906   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:24.790935   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:24.868036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:24.868057   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:24.868073   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:24.957662   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:24.957692   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.502035   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:27.516397   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:27.516477   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:27.551151   78126 cri.go:89] found id: ""
	I1011 22:27:27.551192   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.551204   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:27.551211   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:27.551269   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:27.586218   78126 cri.go:89] found id: ""
	I1011 22:27:27.586245   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.586257   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:27.586265   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:27.586326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:27.620435   78126 cri.go:89] found id: ""
	I1011 22:27:27.620464   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.620475   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:27.620483   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:27.620540   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:27.656548   78126 cri.go:89] found id: ""
	I1011 22:27:27.656576   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.656586   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:27.656592   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:27.656650   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:27.690598   78126 cri.go:89] found id: ""
	I1011 22:27:27.690644   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.690654   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:27.690661   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:27.690725   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:27.724265   78126 cri.go:89] found id: ""
	I1011 22:27:27.724293   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.724304   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:27.724312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:27.724379   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:27.758660   78126 cri.go:89] found id: ""
	I1011 22:27:27.758683   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.758691   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:27.758696   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:27.758748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:27.794463   78126 cri.go:89] found id: ""
	I1011 22:27:27.794493   78126 logs.go:282] 0 containers: []
	W1011 22:27:27.794501   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:27.794510   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:27.794523   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:27.832682   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:27.832706   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:27.884728   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:27.884764   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:27.901043   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:27.901077   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 22:27:24.452366   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:26.950804   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:28.143916   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:30.644830   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:29.149277   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.648385   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	W1011 22:27:27.973066   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:27.973091   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:27.973111   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:30.554002   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:30.567270   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:30.567329   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:30.603976   78126 cri.go:89] found id: ""
	I1011 22:27:30.604012   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.604024   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:30.604031   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:30.604097   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:30.655993   78126 cri.go:89] found id: ""
	I1011 22:27:30.656013   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.656020   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:30.656026   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:30.656074   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:30.708194   78126 cri.go:89] found id: ""
	I1011 22:27:30.708221   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.708233   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:30.708240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:30.708300   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:30.758439   78126 cri.go:89] found id: ""
	I1011 22:27:30.758465   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.758476   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:30.758484   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:30.758550   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:30.792783   78126 cri.go:89] found id: ""
	I1011 22:27:30.792810   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.792821   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:30.792829   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:30.792888   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:30.830099   78126 cri.go:89] found id: ""
	I1011 22:27:30.830125   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.830136   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:30.830144   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:30.830203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:30.866139   78126 cri.go:89] found id: ""
	I1011 22:27:30.866164   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.866173   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:30.866178   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:30.866231   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:30.902753   78126 cri.go:89] found id: ""
	I1011 22:27:30.902776   78126 logs.go:282] 0 containers: []
	W1011 22:27:30.902783   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:30.902791   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:30.902800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:30.938918   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:30.938942   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:30.991300   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:30.991328   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:31.006433   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:31.006459   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:31.083214   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:31.083241   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:31.083256   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:28.952135   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:31.452143   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.143604   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:35.149383   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.649481   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.148545   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:33.667213   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:33.680441   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:33.680513   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:33.716530   78126 cri.go:89] found id: ""
	I1011 22:27:33.716557   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.716569   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:33.716576   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:33.716648   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:33.750344   78126 cri.go:89] found id: ""
	I1011 22:27:33.750373   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.750385   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:33.750392   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:33.750457   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:33.789084   78126 cri.go:89] found id: ""
	I1011 22:27:33.789120   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.789133   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:33.789148   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:33.789211   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:33.823518   78126 cri.go:89] found id: ""
	I1011 22:27:33.823544   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.823553   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:33.823560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:33.823625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:33.855768   78126 cri.go:89] found id: ""
	I1011 22:27:33.855795   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.855805   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:33.855813   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:33.855867   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:33.888937   78126 cri.go:89] found id: ""
	I1011 22:27:33.888962   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.888969   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:33.888975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:33.889044   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:33.920360   78126 cri.go:89] found id: ""
	I1011 22:27:33.920387   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.920398   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:33.920406   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:33.920463   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:33.954043   78126 cri.go:89] found id: ""
	I1011 22:27:33.954063   78126 logs.go:282] 0 containers: []
	W1011 22:27:33.954070   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:33.954077   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:33.954088   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:34.005176   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:34.005206   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:34.020624   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:34.020648   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:34.087140   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:34.087164   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:34.087179   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:34.174148   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:34.174186   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:36.715607   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:36.728610   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:36.728677   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:36.762739   78126 cri.go:89] found id: ""
	I1011 22:27:36.762768   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.762778   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:36.762785   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:36.762855   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:36.804187   78126 cri.go:89] found id: ""
	I1011 22:27:36.804218   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.804228   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:36.804242   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:36.804311   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:36.837216   78126 cri.go:89] found id: ""
	I1011 22:27:36.837245   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.837258   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:36.837265   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:36.837326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:36.876872   78126 cri.go:89] found id: ""
	I1011 22:27:36.876897   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.876907   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:36.876914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:36.876973   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:36.910111   78126 cri.go:89] found id: ""
	I1011 22:27:36.910139   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.910150   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:36.910158   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:36.910205   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:36.944055   78126 cri.go:89] found id: ""
	I1011 22:27:36.944087   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.944098   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:36.944106   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:36.944167   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:36.981371   78126 cri.go:89] found id: ""
	I1011 22:27:36.981400   78126 logs.go:282] 0 containers: []
	W1011 22:27:36.981411   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:36.981418   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:36.981475   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:37.013924   78126 cri.go:89] found id: ""
	I1011 22:27:37.013946   78126 logs.go:282] 0 containers: []
	W1011 22:27:37.013953   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:37.013961   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:37.013977   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:37.086294   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:37.086321   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:37.086339   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:37.162891   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:37.162928   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:37.208234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:37.208263   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:37.260746   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:37.260777   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:33.951885   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:36.450920   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:37.643707   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.644162   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.143479   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:38.649090   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:41.148009   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:39.774712   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:39.788149   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:39.788234   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:39.821247   78126 cri.go:89] found id: ""
	I1011 22:27:39.821272   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.821280   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:39.821285   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:39.821334   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:39.855266   78126 cri.go:89] found id: ""
	I1011 22:27:39.855293   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.855304   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:39.855310   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:39.855370   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:39.889208   78126 cri.go:89] found id: ""
	I1011 22:27:39.889238   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.889249   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:39.889256   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:39.889314   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:39.922228   78126 cri.go:89] found id: ""
	I1011 22:27:39.922254   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.922264   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:39.922271   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:39.922331   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:39.959873   78126 cri.go:89] found id: ""
	I1011 22:27:39.959900   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.959913   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:39.959919   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:39.959980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:39.995821   78126 cri.go:89] found id: ""
	I1011 22:27:39.995845   78126 logs.go:282] 0 containers: []
	W1011 22:27:39.995852   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:39.995859   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:39.995919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:40.038481   78126 cri.go:89] found id: ""
	I1011 22:27:40.038507   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.038516   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:40.038530   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:40.038590   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:40.076458   78126 cri.go:89] found id: ""
	I1011 22:27:40.076485   78126 logs.go:282] 0 containers: []
	W1011 22:27:40.076499   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:40.076509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:40.076524   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:40.149036   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:40.149059   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:40.149074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:40.226651   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:40.226685   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:40.267502   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:40.267534   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:40.317704   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:40.317738   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:42.832811   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:42.845675   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:42.845744   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:42.878414   78126 cri.go:89] found id: ""
	I1011 22:27:42.878436   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.878444   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:42.878449   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:42.878499   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:42.911271   78126 cri.go:89] found id: ""
	I1011 22:27:42.911304   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.911314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:42.911321   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:42.911381   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:38.451524   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:40.954861   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:44.143555   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:46.143976   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:43.149295   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.648165   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:42.945568   78126 cri.go:89] found id: ""
	I1011 22:27:42.945594   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.945602   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:42.945608   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:42.945652   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:42.982582   78126 cri.go:89] found id: ""
	I1011 22:27:42.982611   78126 logs.go:282] 0 containers: []
	W1011 22:27:42.982640   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:42.982647   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:42.982712   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:43.018247   78126 cri.go:89] found id: ""
	I1011 22:27:43.018274   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.018285   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:43.018292   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:43.018352   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:43.057424   78126 cri.go:89] found id: ""
	I1011 22:27:43.057444   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.057451   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:43.057456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:43.057518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:43.091590   78126 cri.go:89] found id: ""
	I1011 22:27:43.091611   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.091624   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:43.091630   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:43.091684   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:43.125292   78126 cri.go:89] found id: ""
	I1011 22:27:43.125319   78126 logs.go:282] 0 containers: []
	W1011 22:27:43.125328   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:43.125336   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:43.125346   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:43.138720   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:43.138755   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:43.205369   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.205396   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:43.205412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:43.285157   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:43.285192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:43.329180   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:43.329212   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:45.879364   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:45.893784   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:45.893857   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:45.925785   78126 cri.go:89] found id: ""
	I1011 22:27:45.925816   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.925826   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:45.925834   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:45.925890   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:45.962537   78126 cri.go:89] found id: ""
	I1011 22:27:45.962565   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.962576   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:45.962583   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:45.962654   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:45.997549   78126 cri.go:89] found id: ""
	I1011 22:27:45.997581   78126 logs.go:282] 0 containers: []
	W1011 22:27:45.997592   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:45.997600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:45.997663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:46.031517   78126 cri.go:89] found id: ""
	I1011 22:27:46.031547   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.031559   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:46.031566   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:46.031625   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:46.066502   78126 cri.go:89] found id: ""
	I1011 22:27:46.066524   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.066535   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:46.066542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:46.066600   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:46.099880   78126 cri.go:89] found id: ""
	I1011 22:27:46.099912   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.099920   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:46.099926   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:46.099986   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:46.138431   78126 cri.go:89] found id: ""
	I1011 22:27:46.138457   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.138468   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:46.138474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:46.138530   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:46.174468   78126 cri.go:89] found id: ""
	I1011 22:27:46.174494   78126 logs.go:282] 0 containers: []
	W1011 22:27:46.174504   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:46.174513   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:46.174526   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:46.251802   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:46.251838   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:46.293166   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:46.293196   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:46.353094   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:46.353128   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:46.367194   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:46.367232   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:46.437505   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:43.451177   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:45.451493   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.951335   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.145191   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.643798   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:47.648963   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:50.150518   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:48.938070   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:48.952267   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:48.952337   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:48.989401   78126 cri.go:89] found id: ""
	I1011 22:27:48.989431   78126 logs.go:282] 0 containers: []
	W1011 22:27:48.989439   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:48.989445   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:48.989507   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:49.026149   78126 cri.go:89] found id: ""
	I1011 22:27:49.026178   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.026189   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:49.026197   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:49.026262   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:49.058395   78126 cri.go:89] found id: ""
	I1011 22:27:49.058428   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.058442   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:49.058450   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:49.058518   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:49.091235   78126 cri.go:89] found id: ""
	I1011 22:27:49.091271   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.091281   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:49.091289   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:49.091345   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:49.124798   78126 cri.go:89] found id: ""
	I1011 22:27:49.124833   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.124845   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:49.124852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:49.124910   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:49.160166   78126 cri.go:89] found id: ""
	I1011 22:27:49.160193   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.160202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:49.160208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:49.160264   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:49.195057   78126 cri.go:89] found id: ""
	I1011 22:27:49.195092   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.195104   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:49.195113   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:49.195170   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:49.228857   78126 cri.go:89] found id: ""
	I1011 22:27:49.228883   78126 logs.go:282] 0 containers: []
	W1011 22:27:49.228900   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:49.228908   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:49.228919   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:49.282560   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:49.282595   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:49.296274   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:49.296302   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:49.374042   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.374061   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:49.374074   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:49.453465   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:49.453495   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:51.995178   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:52.008287   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:52.008346   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:52.040123   78126 cri.go:89] found id: ""
	I1011 22:27:52.040151   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.040162   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:52.040169   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:52.040243   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:52.076602   78126 cri.go:89] found id: ""
	I1011 22:27:52.076642   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.076651   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:52.076656   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:52.076704   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:52.112997   78126 cri.go:89] found id: ""
	I1011 22:27:52.113030   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.113041   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:52.113048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:52.113112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:52.155861   78126 cri.go:89] found id: ""
	I1011 22:27:52.155884   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.155890   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:52.155896   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:52.155951   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:52.192649   78126 cri.go:89] found id: ""
	I1011 22:27:52.192678   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.192693   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:52.192701   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:52.192766   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:52.228147   78126 cri.go:89] found id: ""
	I1011 22:27:52.228173   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.228181   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:52.228187   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:52.228254   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:52.260360   78126 cri.go:89] found id: ""
	I1011 22:27:52.260385   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.260395   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:52.260401   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:52.260472   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:52.292356   78126 cri.go:89] found id: ""
	I1011 22:27:52.292379   78126 logs.go:282] 0 containers: []
	W1011 22:27:52.292387   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:52.292394   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:52.292406   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:52.373085   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:52.373118   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:52.411136   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:52.411191   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:52.465860   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:52.465888   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:52.479834   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:52.479859   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:52.551187   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:49.951782   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.451312   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:53.143194   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.143896   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.144275   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:52.647882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:54.648946   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:56.649832   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:55.051541   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:55.064703   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:55.064802   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:55.100312   78126 cri.go:89] found id: ""
	I1011 22:27:55.100345   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.100355   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:55.100362   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:55.100425   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:55.136279   78126 cri.go:89] found id: ""
	I1011 22:27:55.136305   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.136314   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:55.136320   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:55.136384   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:55.176236   78126 cri.go:89] found id: ""
	I1011 22:27:55.176271   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.176283   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:55.176291   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:55.176354   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:55.211989   78126 cri.go:89] found id: ""
	I1011 22:27:55.212014   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.212021   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:55.212026   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:55.212083   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:55.249907   78126 cri.go:89] found id: ""
	I1011 22:27:55.249934   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.249943   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:55.249948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:55.249994   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:55.286872   78126 cri.go:89] found id: ""
	I1011 22:27:55.286900   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.286911   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:55.286922   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:55.286980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:55.324995   78126 cri.go:89] found id: ""
	I1011 22:27:55.325018   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.325028   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:55.325036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:55.325090   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:55.365065   78126 cri.go:89] found id: ""
	I1011 22:27:55.365093   78126 logs.go:282] 0 containers: []
	W1011 22:27:55.365105   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:55.365117   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:55.365130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:55.404412   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:55.404445   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:55.457791   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:55.457823   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:55.473549   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:55.473578   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:55.546680   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:55.546707   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:55.546722   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:54.951866   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:57.450974   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.144335   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.144508   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:59.148539   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.652535   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:27:58.124833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:27:58.137772   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:27:58.137846   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:27:58.176195   78126 cri.go:89] found id: ""
	I1011 22:27:58.176220   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.176229   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:27:58.176237   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:27:58.176297   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:27:58.208809   78126 cri.go:89] found id: ""
	I1011 22:27:58.208839   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.208850   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:27:58.208858   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:27:58.208919   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:27:58.242000   78126 cri.go:89] found id: ""
	I1011 22:27:58.242022   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.242029   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:27:58.242035   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:27:58.242080   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:27:58.274390   78126 cri.go:89] found id: ""
	I1011 22:27:58.274425   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.274446   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:27:58.274456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:27:58.274515   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:27:58.306295   78126 cri.go:89] found id: ""
	I1011 22:27:58.306318   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.306325   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:27:58.306330   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:27:58.306382   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:27:58.340483   78126 cri.go:89] found id: ""
	I1011 22:27:58.340509   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.340517   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:27:58.340525   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:27:58.340580   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:27:58.376269   78126 cri.go:89] found id: ""
	I1011 22:27:58.376293   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.376310   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:27:58.376322   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:27:58.376378   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:27:58.411669   78126 cri.go:89] found id: ""
	I1011 22:27:58.411697   78126 logs.go:282] 0 containers: []
	W1011 22:27:58.411708   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:27:58.411718   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:27:58.411729   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:27:58.467963   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:27:58.467993   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:27:58.482581   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:27:58.482607   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:27:58.547466   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:27:58.547495   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:27:58.547509   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:27:58.633069   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:27:58.633107   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:01.179269   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:01.193832   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:01.193896   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:01.228563   78126 cri.go:89] found id: ""
	I1011 22:28:01.228594   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.228605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:01.228612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:01.228676   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:01.263146   78126 cri.go:89] found id: ""
	I1011 22:28:01.263189   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.263200   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:01.263207   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:01.263275   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:01.299271   78126 cri.go:89] found id: ""
	I1011 22:28:01.299297   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.299304   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:01.299310   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:01.299360   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:01.335795   78126 cri.go:89] found id: ""
	I1011 22:28:01.335820   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.335828   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:01.335834   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:01.335881   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:01.371325   78126 cri.go:89] found id: ""
	I1011 22:28:01.371350   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.371358   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:01.371364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:01.371423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:01.405937   78126 cri.go:89] found id: ""
	I1011 22:28:01.405972   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.405983   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:01.405990   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:01.406053   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:01.441566   78126 cri.go:89] found id: ""
	I1011 22:28:01.441599   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.441607   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:01.441615   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:01.441678   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:01.477890   78126 cri.go:89] found id: ""
	I1011 22:28:01.477914   78126 logs.go:282] 0 containers: []
	W1011 22:28:01.477921   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:01.477932   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:01.477943   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:01.528376   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:01.528414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:01.542387   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:01.542412   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:01.616964   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:01.616994   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:01.617008   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:01.697175   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:01.697217   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:27:59.452019   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:01.951319   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:03.643904   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.142780   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.149856   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:06.649036   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:04.254008   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:04.267364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:04.267423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:04.301588   78126 cri.go:89] found id: ""
	I1011 22:28:04.301613   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.301621   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:04.301627   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:04.301674   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:04.337466   78126 cri.go:89] found id: ""
	I1011 22:28:04.337489   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.337497   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:04.337503   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:04.337562   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:04.375440   78126 cri.go:89] found id: ""
	I1011 22:28:04.375462   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.375470   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:04.375475   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:04.375528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:04.408195   78126 cri.go:89] found id: ""
	I1011 22:28:04.408223   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.408233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:04.408240   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:04.408302   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:04.446375   78126 cri.go:89] found id: ""
	I1011 22:28:04.446408   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.446420   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:04.446429   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:04.446496   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:04.484039   78126 cri.go:89] found id: ""
	I1011 22:28:04.484062   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.484070   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:04.484076   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:04.484128   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:04.521534   78126 cri.go:89] found id: ""
	I1011 22:28:04.521563   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.521574   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:04.521581   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:04.521642   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:04.556088   78126 cri.go:89] found id: ""
	I1011 22:28:04.556116   78126 logs.go:282] 0 containers: []
	W1011 22:28:04.556127   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:04.556137   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:04.556152   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:04.636039   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:04.636066   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:04.636081   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:04.716003   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:04.716046   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:04.760793   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:04.760817   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:04.815224   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:04.815267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.328945   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:07.341928   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:07.342003   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:07.379521   78126 cri.go:89] found id: ""
	I1011 22:28:07.379542   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.379550   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:07.379558   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:07.379618   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:07.416342   78126 cri.go:89] found id: ""
	I1011 22:28:07.416366   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.416374   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:07.416380   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:07.416429   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:07.453127   78126 cri.go:89] found id: ""
	I1011 22:28:07.453147   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.453153   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:07.453159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:07.453204   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:07.488730   78126 cri.go:89] found id: ""
	I1011 22:28:07.488758   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.488768   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:07.488776   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:07.488828   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:07.523909   78126 cri.go:89] found id: ""
	I1011 22:28:07.523932   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.523940   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:07.523945   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:07.523993   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:07.559330   78126 cri.go:89] found id: ""
	I1011 22:28:07.559362   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.559373   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:07.559382   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:07.559447   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:07.599575   78126 cri.go:89] found id: ""
	I1011 22:28:07.599603   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.599611   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:07.599617   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:07.599664   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:07.633510   78126 cri.go:89] found id: ""
	I1011 22:28:07.633535   78126 logs.go:282] 0 containers: []
	W1011 22:28:07.633543   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:07.633551   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:07.633562   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:07.648120   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:07.648143   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:07.715471   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:07.715498   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:07.715513   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:07.793863   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:07.793897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:07.834167   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:07.834209   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:03.951539   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:05.955152   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.450679   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.143240   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.144659   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:08.649122   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:11.148403   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:10.391116   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:10.404914   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:10.404980   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:10.458345   78126 cri.go:89] found id: ""
	I1011 22:28:10.458364   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.458372   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:10.458377   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:10.458433   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:10.493572   78126 cri.go:89] found id: ""
	I1011 22:28:10.493602   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.493611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:10.493616   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:10.493662   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:10.527115   78126 cri.go:89] found id: ""
	I1011 22:28:10.527140   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.527147   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:10.527153   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:10.527207   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:10.567003   78126 cri.go:89] found id: ""
	I1011 22:28:10.567034   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.567041   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:10.567046   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:10.567107   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:10.602248   78126 cri.go:89] found id: ""
	I1011 22:28:10.602275   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.602284   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:10.602293   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:10.602358   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:10.639215   78126 cri.go:89] found id: ""
	I1011 22:28:10.639246   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.639257   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:10.639264   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:10.639324   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:10.674782   78126 cri.go:89] found id: ""
	I1011 22:28:10.674806   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.674815   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:10.674823   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:10.674885   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:10.710497   78126 cri.go:89] found id: ""
	I1011 22:28:10.710523   78126 logs.go:282] 0 containers: []
	W1011 22:28:10.710531   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:10.710540   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:10.710555   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:10.723650   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:10.723674   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:10.792972   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:10.792996   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:10.793011   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:10.872705   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:10.872739   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:10.915460   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:10.915484   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:10.451221   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.952631   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:12.644135   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.143192   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.144402   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.148449   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:15.648534   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:13.468845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:13.482856   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:13.482918   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:13.519582   78126 cri.go:89] found id: ""
	I1011 22:28:13.519610   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.519617   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:13.519624   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:13.519688   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:13.553821   78126 cri.go:89] found id: ""
	I1011 22:28:13.553846   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.553854   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:13.553859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:13.553907   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:13.590588   78126 cri.go:89] found id: ""
	I1011 22:28:13.590630   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.590645   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:13.590651   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:13.590700   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:13.624563   78126 cri.go:89] found id: ""
	I1011 22:28:13.624586   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.624594   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:13.624600   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:13.624658   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:13.661454   78126 cri.go:89] found id: ""
	I1011 22:28:13.661483   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.661493   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:13.661500   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:13.661560   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:13.704052   78126 cri.go:89] found id: ""
	I1011 22:28:13.704078   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.704089   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:13.704097   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:13.704153   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:13.741106   78126 cri.go:89] found id: ""
	I1011 22:28:13.741133   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.741142   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:13.741147   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:13.741203   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:13.774225   78126 cri.go:89] found id: ""
	I1011 22:28:13.774259   78126 logs.go:282] 0 containers: []
	W1011 22:28:13.774271   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:13.774281   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:13.774295   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:13.825399   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:13.825432   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:13.838891   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:13.838913   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:13.905111   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:13.905143   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:13.905160   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:13.985008   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:13.985039   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:16.527545   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:16.540038   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:16.540110   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:16.572308   78126 cri.go:89] found id: ""
	I1011 22:28:16.572343   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.572354   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:16.572361   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:16.572420   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:16.605965   78126 cri.go:89] found id: ""
	I1011 22:28:16.605994   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.606004   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:16.606012   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:16.606071   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:16.640191   78126 cri.go:89] found id: ""
	I1011 22:28:16.640225   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.640232   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:16.640237   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:16.640289   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:16.674898   78126 cri.go:89] found id: ""
	I1011 22:28:16.674923   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.674950   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:16.674957   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:16.675013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:16.712297   78126 cri.go:89] found id: ""
	I1011 22:28:16.712324   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.712332   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:16.712337   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:16.712412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:16.748691   78126 cri.go:89] found id: ""
	I1011 22:28:16.748718   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.748728   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:16.748735   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:16.748797   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:16.787388   78126 cri.go:89] found id: ""
	I1011 22:28:16.787415   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.787426   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:16.787433   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:16.787505   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:16.825123   78126 cri.go:89] found id: ""
	I1011 22:28:16.825149   78126 logs.go:282] 0 containers: []
	W1011 22:28:16.825157   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:16.825165   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:16.825176   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:16.848287   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:16.848326   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:16.952382   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:16.952401   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:16.952414   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:17.036001   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:17.036036   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:17.076340   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:17.076374   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:15.450809   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:17.451351   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.644591   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.144568   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:18.147818   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:20.150891   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:19.629958   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:19.644557   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:19.644621   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:19.680885   78126 cri.go:89] found id: ""
	I1011 22:28:19.680910   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.680917   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:19.680923   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:19.680978   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:19.716061   78126 cri.go:89] found id: ""
	I1011 22:28:19.716084   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.716091   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:19.716096   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:19.716155   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:19.750059   78126 cri.go:89] found id: ""
	I1011 22:28:19.750096   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.750107   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:19.750114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:19.750172   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:19.784737   78126 cri.go:89] found id: ""
	I1011 22:28:19.784764   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.784776   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:19.784783   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:19.784847   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:19.816838   78126 cri.go:89] found id: ""
	I1011 22:28:19.816860   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.816867   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:19.816873   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:19.816935   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:19.851344   78126 cri.go:89] found id: ""
	I1011 22:28:19.851371   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.851381   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:19.851387   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:19.851451   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.888056   78126 cri.go:89] found id: ""
	I1011 22:28:19.888078   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.888086   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:19.888093   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:19.888160   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:19.922218   78126 cri.go:89] found id: ""
	I1011 22:28:19.922240   78126 logs.go:282] 0 containers: []
	W1011 22:28:19.922249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:19.922256   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:19.922268   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:19.936500   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:19.936527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:20.003082   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:20.003116   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:20.003130   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:20.083377   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:20.083419   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:20.126062   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:20.126093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:22.681603   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:22.695159   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:22.695226   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:22.728478   78126 cri.go:89] found id: ""
	I1011 22:28:22.728520   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.728542   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:22.728549   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:22.728604   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:22.763463   78126 cri.go:89] found id: ""
	I1011 22:28:22.763493   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.763501   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:22.763506   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:22.763565   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:22.796506   78126 cri.go:89] found id: ""
	I1011 22:28:22.796533   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.796540   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:22.796545   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:22.796598   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:22.830075   78126 cri.go:89] found id: ""
	I1011 22:28:22.830101   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.830110   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:22.830119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:22.830166   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:22.866554   78126 cri.go:89] found id: ""
	I1011 22:28:22.866578   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.866586   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:22.866594   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:22.866672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:22.901167   78126 cri.go:89] found id: ""
	I1011 22:28:22.901195   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.901202   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:22.901208   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:22.901258   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:19.951122   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:21.951323   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.643512   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:27.143639   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.648660   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:24.648755   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.648851   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:22.934748   78126 cri.go:89] found id: ""
	I1011 22:28:22.934775   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.934784   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:22.934792   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:22.934850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:22.969467   78126 cri.go:89] found id: ""
	I1011 22:28:22.969492   78126 logs.go:282] 0 containers: []
	W1011 22:28:22.969500   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:22.969509   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:22.969519   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:23.037762   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:23.037783   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:23.037798   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:23.114806   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:23.114839   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:23.155199   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:23.155229   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:23.206641   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:23.206678   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:25.721052   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:25.735439   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:25.735512   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:25.771904   78126 cri.go:89] found id: ""
	I1011 22:28:25.771929   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.771936   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:25.771943   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:25.771996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:25.810964   78126 cri.go:89] found id: ""
	I1011 22:28:25.810995   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.811006   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:25.811014   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:25.811077   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:25.845916   78126 cri.go:89] found id: ""
	I1011 22:28:25.845948   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.845959   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:25.845966   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:25.846025   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:25.880112   78126 cri.go:89] found id: ""
	I1011 22:28:25.880137   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.880145   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:25.880151   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:25.880198   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:25.916515   78126 cri.go:89] found id: ""
	I1011 22:28:25.916542   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.916550   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:25.916556   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:25.916608   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:25.954714   78126 cri.go:89] found id: ""
	I1011 22:28:25.954741   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.954750   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:25.954758   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:25.954824   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:25.987943   78126 cri.go:89] found id: ""
	I1011 22:28:25.987976   78126 logs.go:282] 0 containers: []
	W1011 22:28:25.987989   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:25.987996   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:25.988060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:26.022071   78126 cri.go:89] found id: ""
	I1011 22:28:26.022102   78126 logs.go:282] 0 containers: []
	W1011 22:28:26.022114   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:26.022125   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:26.022142   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:26.035985   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:26.036010   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:26.103770   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:26.103790   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:26.103807   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:26.179372   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:26.179413   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:26.228037   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:26.228093   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
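	The cycle above shows the bootstrapper probing for every control-plane component with "crictl ps -a --quiet --name=<component>" and finding nothing, then falling back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that probe, assuming crictl is installed on the node and the process can run it via sudo (the component list is taken from the log; nothing else is implied about minikube's internals):

	// Hypothetical sketch: list CRI container IDs for named control-plane
	// components, mirroring the "listing CRI containers" calls logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) []string {
		// crictl ps -a --quiet --name=<name> prints one container ID per line;
		// an empty result means no matching container exists under CRI-O.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
			ids := containerIDs(c)
			fmt.Printf("%-22s %d containers: %v\n", c, len(ids), ids)
		}
	}

	An empty list for every component, as in the log, is what triggers the fallback log gathering instead of a normal health check against the apiserver.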
	I1011 22:28:24.450975   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:26.451800   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:29.147583   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.644088   77526 pod_ready.go:103] pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:32.137388   77526 pod_ready.go:82] duration metric: took 4m0.000065444s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:32.137437   77526 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9xr4k" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:32.137454   77526 pod_ready.go:39] duration metric: took 4m13.67950194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:32.137478   77526 kubeadm.go:597] duration metric: took 4m21.517496572s to restartPrimaryControlPlane
	W1011 22:28:32.137532   77526 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:32.137562   77526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:29.150291   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:31.649055   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
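	The pod_ready.go lines above repeatedly report the metrics-server pod's Ready condition as "False" until the 4m0s wait expires. A minimal sketch of that kind of readiness poll, shelling out to kubectl rather than using minikube's own client code (the pod name and namespace are copied from the log; the kubectl binary, 5-second interval, and 4-minute deadline are assumptions):

	// Hypothetical sketch: poll a pod's Ready condition until it is True
	// or a deadline passes, similar to the waits shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(ns, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if ok, err := podReady("kube-system", "metrics-server-6867b74b74-l7xbw"); err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}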
	I1011 22:28:28.779814   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:28.793001   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:28.793058   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:28.831011   78126 cri.go:89] found id: ""
	I1011 22:28:28.831033   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.831041   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:28.831046   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:28.831102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:28.872907   78126 cri.go:89] found id: ""
	I1011 22:28:28.872942   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.872955   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:28.872964   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:28.873042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:28.906516   78126 cri.go:89] found id: ""
	I1011 22:28:28.906543   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.906554   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:28.906560   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:28.906637   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:28.943208   78126 cri.go:89] found id: ""
	I1011 22:28:28.943241   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.943253   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:28.943260   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:28.943322   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:28.981065   78126 cri.go:89] found id: ""
	I1011 22:28:28.981099   78126 logs.go:282] 0 containers: []
	W1011 22:28:28.981111   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:28.981119   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:28.981187   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:29.016532   78126 cri.go:89] found id: ""
	I1011 22:28:29.016559   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.016570   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:29.016577   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:29.016634   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:29.051240   78126 cri.go:89] found id: ""
	I1011 22:28:29.051273   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.051283   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:29.051290   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:29.051353   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:29.087202   78126 cri.go:89] found id: ""
	I1011 22:28:29.087237   78126 logs.go:282] 0 containers: []
	W1011 22:28:29.087249   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:29.087259   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:29.087273   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:29.139617   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:29.139657   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:29.155511   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:29.155535   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:29.221989   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:29.222012   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:29.222028   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:29.299814   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:29.299866   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:31.843996   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:31.857582   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:31.857638   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:31.897952   78126 cri.go:89] found id: ""
	I1011 22:28:31.897980   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.897989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:31.897995   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:31.898055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:31.936648   78126 cri.go:89] found id: ""
	I1011 22:28:31.936679   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.936690   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:31.936700   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:31.936768   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:31.975518   78126 cri.go:89] found id: ""
	I1011 22:28:31.975540   78126 logs.go:282] 0 containers: []
	W1011 22:28:31.975548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:31.975554   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:31.975610   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:32.010062   78126 cri.go:89] found id: ""
	I1011 22:28:32.010089   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.010100   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:32.010107   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:32.010165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:32.048251   78126 cri.go:89] found id: ""
	I1011 22:28:32.048281   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.048292   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:32.048299   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:32.048366   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:32.082947   78126 cri.go:89] found id: ""
	I1011 22:28:32.082983   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.082994   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:32.083002   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:32.083063   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:32.115322   78126 cri.go:89] found id: ""
	I1011 22:28:32.115349   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.115358   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:32.115364   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:32.115423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:32.151832   78126 cri.go:89] found id: ""
	I1011 22:28:32.151859   78126 logs.go:282] 0 containers: []
	W1011 22:28:32.151875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:32.151883   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:32.151892   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:32.209697   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:32.209728   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:32.226637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:32.226676   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:32.297765   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:32.297791   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:32.297810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:32.378767   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:32.378800   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:28.951749   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:30.952578   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.149312   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:36.648952   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:34.922833   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:34.936072   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:34.936139   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:34.975940   78126 cri.go:89] found id: ""
	I1011 22:28:34.975965   78126 logs.go:282] 0 containers: []
	W1011 22:28:34.975975   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:34.975983   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:34.976043   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:35.010094   78126 cri.go:89] found id: ""
	I1011 22:28:35.010123   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.010134   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:35.010141   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:35.010188   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:35.045925   78126 cri.go:89] found id: ""
	I1011 22:28:35.045952   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.045963   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:35.045969   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:35.046029   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:35.083905   78126 cri.go:89] found id: ""
	I1011 22:28:35.083933   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.083944   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:35.083951   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:35.084013   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:35.118515   78126 cri.go:89] found id: ""
	I1011 22:28:35.118542   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.118552   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:35.118559   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:35.118641   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:35.155057   78126 cri.go:89] found id: ""
	I1011 22:28:35.155084   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.155093   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:35.155105   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:35.155171   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:35.195803   78126 cri.go:89] found id: ""
	I1011 22:28:35.195833   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.195844   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:35.195852   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:35.195921   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:35.232921   78126 cri.go:89] found id: ""
	I1011 22:28:35.232950   78126 logs.go:282] 0 containers: []
	W1011 22:28:35.232960   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:35.232970   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:35.232983   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:35.312018   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:35.312055   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:35.353234   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:35.353267   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:35.405044   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:35.405082   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:35.419342   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:35.419381   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:35.496100   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:33.451778   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:35.951964   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:39.148016   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:41.149360   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:37.996977   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:38.010993   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:38.011055   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:38.044961   78126 cri.go:89] found id: ""
	I1011 22:28:38.044985   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.044993   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:38.044999   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:38.045060   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:38.079701   78126 cri.go:89] found id: ""
	I1011 22:28:38.079725   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.079735   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:38.079743   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:38.079807   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:38.112510   78126 cri.go:89] found id: ""
	I1011 22:28:38.112537   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.112548   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:38.112555   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:38.112617   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:38.146954   78126 cri.go:89] found id: ""
	I1011 22:28:38.146981   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.146991   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:38.146998   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:38.147069   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:38.181637   78126 cri.go:89] found id: ""
	I1011 22:28:38.181659   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.181667   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:38.181672   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:38.181719   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:38.215830   78126 cri.go:89] found id: ""
	I1011 22:28:38.215853   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.215862   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:38.215867   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:38.215925   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:38.251494   78126 cri.go:89] found id: ""
	I1011 22:28:38.251524   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.251535   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:38.251542   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:38.251607   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:38.286173   78126 cri.go:89] found id: ""
	I1011 22:28:38.286206   78126 logs.go:282] 0 containers: []
	W1011 22:28:38.286218   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:38.286228   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:38.286246   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:38.335217   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:38.335248   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:38.349071   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:38.349099   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:38.420227   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.420262   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:38.420277   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:38.499572   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:38.499604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.043801   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:41.056685   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:41.056741   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:41.094968   78126 cri.go:89] found id: ""
	I1011 22:28:41.094992   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.094999   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:41.095005   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:41.095050   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:41.127578   78126 cri.go:89] found id: ""
	I1011 22:28:41.127603   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.127611   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:41.127617   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:41.127672   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:41.161913   78126 cri.go:89] found id: ""
	I1011 22:28:41.161936   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.161942   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:41.161948   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:41.161998   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:41.198196   78126 cri.go:89] found id: ""
	I1011 22:28:41.198223   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.198233   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:41.198238   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:41.198298   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:41.231426   78126 cri.go:89] found id: ""
	I1011 22:28:41.231452   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.231467   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:41.231472   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:41.231528   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:41.268432   78126 cri.go:89] found id: ""
	I1011 22:28:41.268454   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.268468   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:41.268474   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:41.268527   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:41.303246   78126 cri.go:89] found id: ""
	I1011 22:28:41.303269   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.303276   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:41.303286   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:41.303340   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:41.337632   78126 cri.go:89] found id: ""
	I1011 22:28:41.337654   78126 logs.go:282] 0 containers: []
	W1011 22:28:41.337663   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:41.337671   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:41.337682   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:41.418788   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:41.418821   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:41.461409   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:41.461441   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:41.513788   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:41.513818   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:41.528305   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:41.528336   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:41.591163   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:38.454387   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:40.952061   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:43.649642   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:46.148528   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:44.091344   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:44.104358   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:44.104412   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:44.140959   78126 cri.go:89] found id: ""
	I1011 22:28:44.140981   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.140989   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:44.140994   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:44.141042   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:44.174812   78126 cri.go:89] found id: ""
	I1011 22:28:44.174842   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.174852   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:44.174859   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:44.174922   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:44.209924   78126 cri.go:89] found id: ""
	I1011 22:28:44.209954   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.209964   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:44.209971   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:44.210030   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:44.241708   78126 cri.go:89] found id: ""
	I1011 22:28:44.241737   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.241746   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:44.241751   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:44.241798   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:44.274637   78126 cri.go:89] found id: ""
	I1011 22:28:44.274661   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.274669   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:44.274674   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:44.274731   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:44.307920   78126 cri.go:89] found id: ""
	I1011 22:28:44.307953   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.307960   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:44.307975   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:44.308038   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:44.339957   78126 cri.go:89] found id: ""
	I1011 22:28:44.339984   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.339995   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:44.340003   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:44.340051   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:44.373589   78126 cri.go:89] found id: ""
	I1011 22:28:44.373619   78126 logs.go:282] 0 containers: []
	W1011 22:28:44.373630   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:44.373641   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:44.373655   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:44.458563   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:44.458597   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:44.497194   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:44.497223   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:44.548541   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:44.548577   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:44.562167   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:44.562192   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:44.629000   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.129736   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:47.143586   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:47.143653   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:47.180419   78126 cri.go:89] found id: ""
	I1011 22:28:47.180443   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.180451   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:47.180457   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:47.180504   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:47.217139   78126 cri.go:89] found id: ""
	I1011 22:28:47.217162   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.217169   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:47.217175   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:47.217225   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:47.255554   78126 cri.go:89] found id: ""
	I1011 22:28:47.255579   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.255587   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:47.255593   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:47.255656   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:47.289782   78126 cri.go:89] found id: ""
	I1011 22:28:47.289806   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.289813   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:47.289819   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:47.289863   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:47.323887   78126 cri.go:89] found id: ""
	I1011 22:28:47.323917   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.323928   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:47.323936   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:47.323996   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:47.358274   78126 cri.go:89] found id: ""
	I1011 22:28:47.358297   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.358306   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:47.358312   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:47.358356   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:47.391796   78126 cri.go:89] found id: ""
	I1011 22:28:47.391824   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.391835   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:47.391842   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:47.391901   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:47.428492   78126 cri.go:89] found id: ""
	I1011 22:28:47.428516   78126 logs.go:282] 0 containers: []
	W1011 22:28:47.428525   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:47.428533   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:47.428544   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:47.493580   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:47.493609   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:47.510709   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:47.510740   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:47.589656   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:47.589680   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:47.589695   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:47.682726   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:47.682760   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:43.451280   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:45.952227   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.451044   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:48.149006   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.649552   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:50.223845   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:50.238227   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:50.238305   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:50.273569   78126 cri.go:89] found id: ""
	I1011 22:28:50.273597   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.273605   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:50.273612   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:50.273663   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:50.307556   78126 cri.go:89] found id: ""
	I1011 22:28:50.307582   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.307593   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:50.307600   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:50.307660   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:50.342553   78126 cri.go:89] found id: ""
	I1011 22:28:50.342578   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.342589   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:50.342597   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:50.342667   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:50.377318   78126 cri.go:89] found id: ""
	I1011 22:28:50.377345   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.377356   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:50.377363   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:50.377423   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:50.414137   78126 cri.go:89] found id: ""
	I1011 22:28:50.414164   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.414174   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:50.414180   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:50.414250   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:50.450821   78126 cri.go:89] found id: ""
	I1011 22:28:50.450848   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.450858   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:50.450865   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:50.450944   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:50.483992   78126 cri.go:89] found id: ""
	I1011 22:28:50.484018   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.484029   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:50.484036   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:50.484102   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:50.516837   78126 cri.go:89] found id: ""
	I1011 22:28:50.516864   78126 logs.go:282] 0 containers: []
	W1011 22:28:50.516875   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:50.516885   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:50.516897   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:50.569676   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:50.569718   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:50.582873   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:50.582898   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:50.655017   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:50.655042   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:50.655056   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:50.741118   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:50.741148   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:50.451478   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:52.951299   77741 pod_ready.go:103] pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:53.445808   77741 pod_ready.go:82] duration metric: took 4m0.000846456s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" ...
	E1011 22:28:53.445846   77741 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l7xbw" in "kube-system" namespace to be "Ready" (will not retry!)
	I1011 22:28:53.445869   77741 pod_ready.go:39] duration metric: took 4m16.735338637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:28:53.445899   77741 kubeadm.go:597] duration metric: took 4m23.626843864s to restartPrimaryControlPlane
	W1011 22:28:53.445964   77741 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:53.445996   77741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:53.279343   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:53.293048   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:28:53.293112   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:28:53.335650   78126 cri.go:89] found id: ""
	I1011 22:28:53.335674   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.335681   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:28:53.335689   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:28:53.335748   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:28:53.368226   78126 cri.go:89] found id: ""
	I1011 22:28:53.368254   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.368264   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:28:53.368270   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:28:53.368332   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:28:53.401409   78126 cri.go:89] found id: ""
	I1011 22:28:53.401439   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.401450   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:28:53.401456   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:28:53.401517   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:28:53.436078   78126 cri.go:89] found id: ""
	I1011 22:28:53.436100   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.436108   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:28:53.436114   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:28:53.436165   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:28:53.474986   78126 cri.go:89] found id: ""
	I1011 22:28:53.475016   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.475026   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:28:53.475032   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:28:53.475092   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:28:53.510715   78126 cri.go:89] found id: ""
	I1011 22:28:53.510746   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.510758   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:28:53.510767   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:28:53.510833   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:28:53.547239   78126 cri.go:89] found id: ""
	I1011 22:28:53.547266   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.547275   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:28:53.547280   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:28:53.547326   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:28:53.588546   78126 cri.go:89] found id: ""
	I1011 22:28:53.588572   78126 logs.go:282] 0 containers: []
	W1011 22:28:53.588584   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:28:53.588594   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:28:53.588604   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:28:53.640404   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:28:53.640436   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:28:53.656637   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:28:53.656668   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:28:53.726870   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:28:53.726893   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:28:53.726907   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1011 22:28:53.807490   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:28:53.807527   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:28:56.344899   78126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:28:56.357272   78126 kubeadm.go:597] duration metric: took 4m3.213709713s to restartPrimaryControlPlane
	W1011 22:28:56.357335   78126 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:28:56.357355   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:28:56.806057   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:56.820534   78126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:56.830947   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:56.841099   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:56.841123   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:56.841169   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:56.850400   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:56.850444   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:56.859913   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:56.869056   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:56.869114   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:56.878858   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.888396   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:56.888439   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:56.897855   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:56.907385   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:56.907452   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
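The four grep-and-remove pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it still references the expected control-plane endpoint, and is otherwise deleted before kubeadm init regenerates it. A rough shell equivalent of the pattern visible in the log (a sketch of the behaviour, not minikube's actual implementation):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # delete the kubeconfig unless it already points at the expected endpoint
	    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	    fi
	done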
	I1011 22:28:56.916993   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:56.991551   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:28:56.991644   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:57.138652   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:57.138815   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:57.138921   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:28:57.316973   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:53.148309   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:55.149231   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:57.318686   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:57.318798   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:57.318885   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:57.319031   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:57.319101   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:57.319203   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:57.319296   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:57.319629   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:57.319985   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:57.320444   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:57.320927   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:57.321078   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:57.321168   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:57.446174   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:57.989775   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:58.137706   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:58.277600   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:58.297823   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:58.302288   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:58.302575   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:58.474816   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:58.243748   77526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.106148594s)
	I1011 22:28:58.243837   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:28:58.263915   77526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:28:58.281349   77526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:28:58.297636   77526 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:28:58.297661   77526 kubeadm.go:157] found existing configuration files:
	
	I1011 22:28:58.297710   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:28:58.311371   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:28:58.311444   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:28:58.330584   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:28:58.350348   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:28:58.350403   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:28:58.376417   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.390350   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:28:58.390399   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:28:58.404955   77526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:28:58.416263   77526 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:28:58.416322   77526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:28:58.425942   77526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:28:58.478782   77526 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:28:58.478835   77526 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:28:58.590185   77526 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:28:58.590333   77526 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:28:58.590451   77526 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:28:58.598371   77526 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:28:58.600253   77526 out.go:235]   - Generating certificates and keys ...
	I1011 22:28:58.600357   77526 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:28:58.600458   77526 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:28:58.600569   77526 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:28:58.600657   77526 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:28:58.600761   77526 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:28:58.600827   77526 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:28:58.600913   77526 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:28:58.601018   77526 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:28:58.601122   77526 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:28:58.601250   77526 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:28:58.601335   77526 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:28:58.601417   77526 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:28:58.951248   77526 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:28:59.187453   77526 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:28:59.496055   77526 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:28:59.583363   77526 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:28:59.747699   77526 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:28:59.748339   77526 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:59.750963   77526 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:28:59.752710   77526 out.go:235]   - Booting up control plane ...
	I1011 22:28:59.752858   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:59.752956   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:59.753174   77526 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:59.770682   77526 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:28:59.776919   77526 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:28:59.776989   77526 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:28:59.900964   77526 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:28:59.901122   77526 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:00.402400   77526 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.862362ms
	I1011 22:29:00.402529   77526 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:28:57.648367   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:00.148371   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:02.153536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:28:58.476523   78126 out.go:235]   - Booting up control plane ...
	I1011 22:28:58.476658   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:28:58.481519   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:28:58.482472   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:28:58.484150   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:28:58.488685   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:29:05.905921   77526 kubeadm.go:310] [api-check] The API server is healthy after 5.501955207s
	I1011 22:29:05.918054   77526 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:05.936720   77526 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:05.982293   77526 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:05.982571   77526 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-223942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:06.007168   77526 kubeadm.go:310] [bootstrap-token] Using token: a4lu2p.4yfrrazoy97j5yu0
	I1011 22:29:06.008642   77526 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:06.008749   77526 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:06.020393   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:06.032191   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:06.039269   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:06.043990   77526 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:06.053648   77526 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:06.312388   77526 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:06.740160   77526 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:07.315305   77526 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:07.317697   77526 kubeadm.go:310] 
	I1011 22:29:07.317793   77526 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:07.317806   77526 kubeadm.go:310] 
	I1011 22:29:07.317929   77526 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:07.317950   77526 kubeadm.go:310] 
	I1011 22:29:07.318009   77526 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:07.318126   77526 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:07.318222   77526 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:07.318232   77526 kubeadm.go:310] 
	I1011 22:29:07.318281   77526 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:07.318289   77526 kubeadm.go:310] 
	I1011 22:29:07.318339   77526 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:07.318350   77526 kubeadm.go:310] 
	I1011 22:29:07.318424   77526 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:07.318528   77526 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:07.318630   77526 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:07.318644   77526 kubeadm.go:310] 
	I1011 22:29:07.318750   77526 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:07.318823   77526 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:07.318830   77526 kubeadm.go:310] 
	I1011 22:29:07.318913   77526 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319086   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:07.319124   77526 kubeadm.go:310] 	--control-plane 
	I1011 22:29:07.319133   77526 kubeadm.go:310] 
	I1011 22:29:07.319256   77526 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:07.319264   77526 kubeadm.go:310] 
	I1011 22:29:07.319366   77526 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4lu2p.4yfrrazoy97j5yu0 \
	I1011 22:29:07.319505   77526 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:07.321368   77526 kubeadm.go:310] W1011 22:28:58.449635    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321691   77526 kubeadm.go:310] W1011 22:28:58.450407    2542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:07.321866   77526 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:07.321888   77526 cni.go:84] Creating CNI manager for ""
	I1011 22:29:07.321899   77526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:07.323580   77526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:07.324762   77526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:07.335614   77526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
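The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; a minimal bridge-plugin configuration of the kind minikube installs for the "bridge CNI" step looks roughly like the following (field values are illustrative assumptions, not the file's actual contents):

	# illustrative only; minikube generates the real file contents
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF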
	I1011 22:29:04.648441   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:06.648506   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:07.354851   77526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:07.355473   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:07.355479   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-223942 minikube.k8s.io/updated_at=2024_10_11T22_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=embed-certs-223942 minikube.k8s.io/primary=true
	I1011 22:29:07.397703   77526 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:07.581167   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.081395   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:08.582200   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.081862   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:09.581361   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.081246   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:10.581754   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.081988   77526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:11.179021   77526 kubeadm.go:1113] duration metric: took 3.82416989s to wait for elevateKubeSystemPrivileges
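The repeated "kubectl get sa default" calls above make up the elevateKubeSystemPrivileges step: minikube creates the minikube-rbac ClusterRoleBinding and then polls until the controller-manager has created the default ServiceAccount. Condensed into a shell sketch (the two kubectl invocations are the ones shown in the log; the loop structure is illustrative):

	KCTL="sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	# grant cluster-admin to kube-system:default (the minikube-rbac binding from the log)
	$KCTL create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# then poll until the default ServiceAccount exists
	until $KCTL get sa default >/dev/null 2>&1; do sleep 0.5; done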
	I1011 22:29:11.179061   77526 kubeadm.go:394] duration metric: took 5m0.606049956s to StartCluster
	I1011 22:29:11.179086   77526 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.179171   77526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:11.181572   77526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:11.181873   77526 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:11.181938   77526 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:11.182035   77526 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223942"
	I1011 22:29:11.182059   77526 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223942"
	I1011 22:29:11.182060   77526 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223942"
	W1011 22:29:11.182070   77526 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:11.182078   77526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223942"
	I1011 22:29:11.182102   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182114   77526 config.go:182] Loaded profile config "embed-certs-223942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:11.182091   77526 addons.go:69] Setting metrics-server=true in profile "embed-certs-223942"
	I1011 22:29:11.182147   77526 addons.go:234] Setting addon metrics-server=true in "embed-certs-223942"
	W1011 22:29:11.182161   77526 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:11.182196   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.182515   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182558   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182579   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.182550   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.182692   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.183573   77526 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:11.184930   77526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:11.198456   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1011 22:29:11.198666   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I1011 22:29:11.199044   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199141   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.199592   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199607   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199726   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.199744   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.199950   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200104   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.200248   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.200557   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.200608   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.201637   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1011 22:29:11.202066   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.202541   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.202560   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.202894   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.203434   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.203474   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.204227   77526 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223942"
	W1011 22:29:11.204249   77526 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:11.204281   77526 host.go:66] Checking if "embed-certs-223942" exists ...
	I1011 22:29:11.204663   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.204707   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.218765   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I1011 22:29:11.218894   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I1011 22:29:11.219238   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219244   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.219747   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219772   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.219949   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.219970   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.220019   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220167   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220232   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.220785   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.220847   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1011 22:29:11.221152   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.221591   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.221614   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.222116   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.222135   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222401   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.222916   77526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:11.222955   77526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:11.224006   77526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:11.224007   77526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:11.225424   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:11.225455   77526 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:11.225474   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.226095   77526 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.226115   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:11.226131   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.228914   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229448   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.229472   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229542   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.229584   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.229744   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230021   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.230025   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230037   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.230118   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.230496   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.230648   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.230781   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.230897   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.238742   77526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1011 22:29:11.239211   77526 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:11.239762   77526 main.go:141] libmachine: Using API Version  1
	I1011 22:29:11.239786   77526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:11.240061   77526 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:11.240238   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetState
	I1011 22:29:11.241740   77526 main.go:141] libmachine: (embed-certs-223942) Calling .DriverName
	I1011 22:29:11.241967   77526 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.241986   77526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:11.242007   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHHostname
	I1011 22:29:11.244886   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245237   77526 main.go:141] libmachine: (embed-certs-223942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:2c:1c", ip: ""} in network mk-embed-certs-223942: {Iface:virbr4 ExpiryTime:2024-10-11 23:15:26 +0000 UTC Type:0 Mac:52:54:00:06:2c:1c Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:embed-certs-223942 Clientid:01:52:54:00:06:2c:1c}
	I1011 22:29:11.245260   77526 main.go:141] libmachine: (embed-certs-223942) DBG | domain embed-certs-223942 has defined IP address 192.168.72.238 and MAC address 52:54:00:06:2c:1c in network mk-embed-certs-223942
	I1011 22:29:11.245501   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHPort
	I1011 22:29:11.245684   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHKeyPath
	I1011 22:29:11.245882   77526 main.go:141] libmachine: (embed-certs-223942) Calling .GetSSHUsername
	I1011 22:29:11.246052   77526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/embed-certs-223942/id_rsa Username:docker}
	I1011 22:29:11.365926   77526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:11.391766   77526 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401923   77526 node_ready.go:49] node "embed-certs-223942" has status "Ready":"True"
	I1011 22:29:11.401943   77526 node_ready.go:38] duration metric: took 10.139287ms for node "embed-certs-223942" to be "Ready" ...
	I1011 22:29:11.401952   77526 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:11.406561   77526 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:11.460959   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:11.460992   77526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:11.475600   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:11.481436   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:11.481465   77526 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:11.515478   77526 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.515500   77526 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:11.558164   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:11.569398   77526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:11.795782   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.795805   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796093   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:11.796119   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796137   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.796152   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.796163   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.796373   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.796389   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809155   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:11.809176   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:11.809439   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:11.809457   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:11.809463   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475441   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475469   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.475720   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.475769   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.475789   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.475805   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.475815   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.476016   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.476027   77526 main.go:141] libmachine: (embed-certs-223942) DBG | Closing plugin on server side
	I1011 22:29:12.476031   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.476041   77526 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223942"
	I1011 22:29:12.503190   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503219   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503530   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503574   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.503588   77526 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:12.503598   77526 main.go:141] libmachine: (embed-certs-223942) Calling .Close
	I1011 22:29:12.503834   77526 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:12.503850   77526 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:12.505379   77526 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1011 22:29:09.149809   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:11.650232   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:12.506382   77526 addons.go:510] duration metric: took 1.324453305s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1011 22:29:13.412840   77526 pod_ready.go:103] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:13.918905   77526 pod_ready.go:93] pod "etcd-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:13.918926   77526 pod_ready.go:82] duration metric: took 2.512345346s for pod "etcd-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:13.918936   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:15.925307   77526 pod_ready.go:103] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:14.149051   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:16.649622   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:17.925327   77526 pod_ready.go:93] pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.925353   77526 pod_ready.go:82] duration metric: took 4.006410198s for pod "kube-apiserver-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.925366   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929846   77526 pod_ready.go:93] pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.929872   77526 pod_ready.go:82] duration metric: took 4.495642ms for pod "kube-controller-manager-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.929883   77526 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933635   77526 pod_ready.go:93] pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:17.933652   77526 pod_ready.go:82] duration metric: took 3.761139ms for pod "kube-scheduler-embed-certs-223942" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:17.933661   77526 pod_ready.go:39] duration metric: took 6.531698315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:17.933677   77526 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:17.933732   77526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:17.950153   77526 api_server.go:72] duration metric: took 6.768243331s to wait for apiserver process to appear ...
	I1011 22:29:17.950174   77526 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:17.950192   77526 api_server.go:253] Checking apiserver healthz at https://192.168.72.238:8443/healthz ...
	I1011 22:29:17.953743   77526 api_server.go:279] https://192.168.72.238:8443/healthz returned 200:
	ok
	I1011 22:29:17.954586   77526 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:17.954610   77526 api_server.go:131] duration metric: took 4.428307ms to wait for apiserver health ...
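The healthz probe above is a plain HTTPS GET against the apiserver; the same check can be reproduced by hand, since /healthz is typically served to unauthenticated clients by default (illustrative, using the node IP from the log):

	curl -sk https://192.168.72.238:8443/healthz
	# expected output: ok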
	I1011 22:29:17.954629   77526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:17.959411   77526 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:17.959432   77526 system_pods.go:61] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.959438   77526 system_pods.go:61] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.959443   77526 system_pods.go:61] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.959447   77526 system_pods.go:61] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.959451   77526 system_pods.go:61] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.959454   77526 system_pods.go:61] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.959457   77526 system_pods.go:61] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.959468   77526 system_pods.go:61] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.959473   77526 system_pods.go:61] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.959480   77526 system_pods.go:74] duration metric: took 4.84106ms to wait for pod list to return data ...
	I1011 22:29:17.959488   77526 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:17.962273   77526 default_sa.go:45] found service account: "default"
	I1011 22:29:17.962294   77526 default_sa.go:55] duration metric: took 2.80012ms for default service account to be created ...
	I1011 22:29:17.962302   77526 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:17.966653   77526 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:17.966675   77526 system_pods.go:89] "coredns-7c65d6cfc9-bchd4" [9888edee-2d83-4ac7-9dcf-14a0d4c1adfc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1011 22:29:17.966681   77526 system_pods.go:89] "coredns-7c65d6cfc9-qcct7" [addf150f-9f60-4184-9a87-8034b9d3fd8f] Running
	I1011 22:29:17.966686   77526 system_pods.go:89] "etcd-embed-certs-223942" [6f144b6d-5992-4780-b005-359c9bab2494] Running
	I1011 22:29:17.966691   77526 system_pods.go:89] "kube-apiserver-embed-certs-223942" [a3dbccdf-db70-46cb-b829-24d2856b4e1c] Running
	I1011 22:29:17.966695   77526 system_pods.go:89] "kube-controller-manager-embed-certs-223942" [efbd6ee8-435e-4842-a907-d63ab3117a4b] Running
	I1011 22:29:17.966698   77526 system_pods.go:89] "kube-proxy-8qv4k" [76dc11bd-3597-4268-839e-9bace3c3e897] Running
	I1011 22:29:17.966702   77526 system_pods.go:89] "kube-scheduler-embed-certs-223942" [a9d4e133-6af7-43f1-a4a7-76b1334be481] Running
	I1011 22:29:17.966741   77526 system_pods.go:89] "metrics-server-6867b74b74-5s6hn" [526f3ae3-7af0-4542-87d4-66b0281b4058] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:17.966751   77526 system_pods.go:89] "storage-provisioner" [60223d53-4645-45d1-8546-9050636a6205] Running
	I1011 22:29:17.966759   77526 system_pods.go:126] duration metric: took 4.452902ms to wait for k8s-apps to be running ...
	I1011 22:29:17.966766   77526 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:17.966807   77526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:17.982751   77526 system_svc.go:56] duration metric: took 15.979158ms WaitForService to wait for kubelet
	I1011 22:29:17.982770   77526 kubeadm.go:582] duration metric: took 6.800865436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:17.982788   77526 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:17.985340   77526 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:17.985361   77526 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:17.985373   77526 node_conditions.go:105] duration metric: took 2.578879ms to run NodePressure ...
	I1011 22:29:17.985385   77526 start.go:241] waiting for startup goroutines ...
	I1011 22:29:17.985398   77526 start.go:246] waiting for cluster config update ...
	I1011 22:29:17.985415   77526 start.go:255] writing updated cluster config ...
	I1011 22:29:17.985668   77526 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:18.034091   77526 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:18.036159   77526 out.go:177] * Done! kubectl is now configured to use "embed-certs-223942" cluster and "default" namespace by default
	I1011 22:29:19.671974   77741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225955809s)
	I1011 22:29:19.672048   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:19.689229   77741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:29:19.701141   77741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:29:19.714596   77741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:29:19.714630   77741 kubeadm.go:157] found existing configuration files:
	
	I1011 22:29:19.714674   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1011 22:29:19.729207   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:29:19.729273   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:29:19.739052   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1011 22:29:19.748101   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:29:19.748162   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:29:19.757518   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.766689   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:29:19.766754   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:29:19.776197   77741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1011 22:29:19.785329   77741 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:29:19.785381   77741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:29:19.794742   77741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:29:19.837814   77741 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:29:19.837936   77741 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:29:19.956401   77741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:29:19.956502   77741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:29:19.956574   77741 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:29:19.965603   77741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:29:19.967637   77741 out.go:235]   - Generating certificates and keys ...
	I1011 22:29:19.967726   77741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:29:19.967793   77741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:29:19.967875   77741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:29:19.967965   77741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:29:19.968066   77741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:29:19.968139   77741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:29:19.968224   77741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:29:19.968319   77741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:29:19.968435   77741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:29:19.968545   77741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:29:19.968608   77741 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:29:19.968701   77741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:29:20.266256   77741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:29:20.353124   77741 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:29:20.693912   77741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:29:20.814227   77741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:29:21.028714   77741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:29:21.029382   77741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:29:21.032065   77741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:29:19.149346   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.648583   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:21.033900   77741 out.go:235]   - Booting up control plane ...
	I1011 22:29:21.034020   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:29:21.034134   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:29:21.034236   77741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:29:21.053259   77741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:29:21.060157   77741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:29:21.060229   77741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:29:21.190140   77741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:29:21.190325   77741 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:29:21.691954   77741 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78398ms
	I1011 22:29:21.692069   77741 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:29:26.696518   77741 kubeadm.go:310] [api-check] The API server is healthy after 5.002229227s
	I1011 22:29:26.710581   77741 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:29:26.726686   77741 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:29:26.759596   77741 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:29:26.759894   77741 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-070708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:29:26.769529   77741 kubeadm.go:310] [bootstrap-token] Using token: dhosfn.441jcramrxgiydi4
	I1011 22:29:24.149380   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.647490   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:26.770660   77741 out.go:235]   - Configuring RBAC rules ...
	I1011 22:29:26.770801   77741 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:29:26.775859   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:29:26.783572   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:29:26.789736   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:29:26.793026   77741 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:29:26.797814   77741 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:29:27.102055   77741 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:29:27.537636   77741 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:29:28.102099   77741 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:29:28.103130   77741 kubeadm.go:310] 
	I1011 22:29:28.103241   77741 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:29:28.103264   77741 kubeadm.go:310] 
	I1011 22:29:28.103371   77741 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:29:28.103379   77741 kubeadm.go:310] 
	I1011 22:29:28.103400   77741 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:29:28.103454   77741 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:29:28.103506   77741 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:29:28.103510   77741 kubeadm.go:310] 
	I1011 22:29:28.103565   77741 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:29:28.103569   77741 kubeadm.go:310] 
	I1011 22:29:28.103618   77741 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:29:28.103624   77741 kubeadm.go:310] 
	I1011 22:29:28.103666   77741 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:29:28.103778   77741 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:29:28.103874   77741 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:29:28.103882   77741 kubeadm.go:310] 
	I1011 22:29:28.103960   77741 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:29:28.104023   77741 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:29:28.104029   77741 kubeadm.go:310] 
	I1011 22:29:28.104096   77741 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104179   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:29:28.104199   77741 kubeadm.go:310] 	--control-plane 
	I1011 22:29:28.104205   77741 kubeadm.go:310] 
	I1011 22:29:28.104271   77741 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:29:28.104277   77741 kubeadm.go:310] 
	I1011 22:29:28.104384   77741 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dhosfn.441jcramrxgiydi4 \
	I1011 22:29:28.104513   77741 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:29:28.105322   77741 kubeadm.go:310] W1011 22:29:19.811300    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105623   77741 kubeadm.go:310] W1011 22:29:19.812133    2545 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:29:28.105772   77741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:29:28.105796   77741 cni.go:84] Creating CNI manager for ""
	I1011 22:29:28.105808   77741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:29:28.107671   77741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:29:28.108911   77741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:29:28.121190   77741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:29:28.143442   77741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:29:28.143523   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.143537   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-070708 minikube.k8s.io/updated_at=2024_10_11T22_29_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=default-k8s-diff-port-070708 minikube.k8s.io/primary=true
	I1011 22:29:28.380171   77741 ops.go:34] apiserver oom_adj: -16
	I1011 22:29:28.380244   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:28.649448   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:31.147882   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:28.880541   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.380686   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:29.880953   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.381236   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:30.880946   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.380516   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:31.880841   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.380874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.880874   77741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:29:32.969809   77741 kubeadm.go:1113] duration metric: took 4.826361525s to wait for elevateKubeSystemPrivileges
	I1011 22:29:32.969844   77741 kubeadm.go:394] duration metric: took 5m3.206576288s to StartCluster
	I1011 22:29:32.969864   77741 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.969949   77741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:29:32.972053   77741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:29:32.972321   77741 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:29:32.972419   77741 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:29:32.972545   77741 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972564   77741 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972572   77741 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:29:32.972580   77741 config.go:182] Loaded profile config "default-k8s-diff-port-070708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:29:32.972577   77741 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972601   77741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-070708"
	I1011 22:29:32.972590   77741 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-070708"
	I1011 22:29:32.972621   77741 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.972631   77741 addons.go:243] addon metrics-server should already be in state true
	I1011 22:29:32.972676   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972605   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.972952   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.972982   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973051   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973088   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973111   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.973143   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.973995   77741 out.go:177] * Verifying Kubernetes components...
	I1011 22:29:32.975387   77741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:29:32.989010   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1011 22:29:32.989449   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.989866   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I1011 22:29:32.990100   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990127   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.990213   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.990478   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.990668   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.990692   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.991068   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991071   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.991110   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991647   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1011 22:29:32.991671   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.991703   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:32.991966   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:32.992453   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:32.992486   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:32.992808   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:32.992950   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:32.995986   77741 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-070708"
	W1011 22:29:32.996004   77741 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:29:32.996031   77741 host.go:66] Checking if "default-k8s-diff-port-070708" exists ...
	I1011 22:29:32.996271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:32.996311   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.010650   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I1011 22:29:33.010949   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1011 22:29:33.011111   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011350   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I1011 22:29:33.011490   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.011509   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.011838   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.011936   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012113   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.012272   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012283   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.012338   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.012663   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.012877   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.012897   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.013271   77741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:29:33.013307   77741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:29:33.013511   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.013691   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.014538   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.015400   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.016387   77741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:29:33.017187   77741 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:29:33.018090   77741 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.018111   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:29:33.018130   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.018972   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:29:33.018994   77741 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:29:33.019015   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.021827   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022205   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.022226   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.022391   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.022513   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.022704   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.022865   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.023070   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023552   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.023574   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.023872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.024067   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.024222   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.024376   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.030089   77741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I1011 22:29:33.030477   77741 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:29:33.030929   77741 main.go:141] libmachine: Using API Version  1
	I1011 22:29:33.030954   77741 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:29:33.031352   77741 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:29:33.031571   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetState
	I1011 22:29:33.033098   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .DriverName
	I1011 22:29:33.033335   77741 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.033351   77741 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:29:33.033366   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHHostname
	I1011 22:29:33.036390   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.036758   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:e0:21", ip: ""} in network mk-default-k8s-diff-port-070708: {Iface:virbr1 ExpiryTime:2024-10-11 23:24:16 +0000 UTC Type:0 Mac:52:54:00:9d:e0:21 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-070708 Clientid:01:52:54:00:9d:e0:21}
	I1011 22:29:33.036780   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | domain default-k8s-diff-port-070708 has defined IP address 192.168.39.162 and MAC address 52:54:00:9d:e0:21 in network mk-default-k8s-diff-port-070708
	I1011 22:29:33.037025   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHPort
	I1011 22:29:33.037173   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHKeyPath
	I1011 22:29:33.037322   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .GetSSHUsername
	I1011 22:29:33.037467   77741 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/default-k8s-diff-port-070708/id_rsa Username:docker}
	I1011 22:29:33.201955   77741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:29:33.220870   77741 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229595   77741 node_ready.go:49] node "default-k8s-diff-port-070708" has status "Ready":"True"
	I1011 22:29:33.229615   77741 node_ready.go:38] duration metric: took 8.713422ms for node "default-k8s-diff-port-070708" to be "Ready" ...
	I1011 22:29:33.229623   77741 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:33.237626   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:33.298146   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:29:33.298166   77741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:29:33.308268   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:29:33.320862   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:29:33.346501   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:29:33.346536   77741 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:29:33.406404   77741 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.406435   77741 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:29:33.480527   77741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:29:33.629133   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629162   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.629545   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.629564   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.629565   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.629616   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.629625   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.630896   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.630904   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.630918   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:33.636620   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:33.636640   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:33.636979   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:33.636989   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:33.637001   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305476   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305507   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.305773   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.305798   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.305809   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.305821   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.306123   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.306168   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.306128   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.756210   77741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.275647241s)
	I1011 22:29:34.756257   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756271   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756536   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756558   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756567   77741 main.go:141] libmachine: Making call to close driver server
	I1011 22:29:34.756575   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) Calling .Close
	I1011 22:29:34.756844   77741 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:29:34.756891   77741 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:29:34.756911   77741 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-070708"
	I1011 22:29:34.756872   77741 main.go:141] libmachine: (default-k8s-diff-port-070708) DBG | Closing plugin on server side
	I1011 22:29:34.759057   77741 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1011 22:29:33.148846   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:35.649536   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:34.760328   77741 addons.go:510] duration metric: took 1.787917365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1011 22:29:34.764676   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:34.764703   77741 pod_ready.go:82] duration metric: took 1.527054334s for pod "coredns-7c65d6cfc9-gtw9g" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:34.764716   77741 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773717   77741 pod_ready.go:93] pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.773739   77741 pod_ready.go:82] duration metric: took 1.009014594s for pod "coredns-7c65d6cfc9-zvctp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.773747   77741 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779537   77741 pod_ready.go:93] pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:35.779554   77741 pod_ready.go:82] duration metric: took 5.801388ms for pod "etcd-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:35.779562   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785272   77741 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:36.785302   77741 pod_ready.go:82] duration metric: took 1.005732291s for pod "kube-apiserver-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:36.785316   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:38.790774   77741 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.790257   77741 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.790285   77741 pod_ready.go:82] duration metric: took 4.004960127s for pod "kube-controller-manager-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.790298   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794434   77741 pod_ready.go:93] pod "kube-proxy-f5jxp" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.794457   77741 pod_ready.go:82] duration metric: took 4.15174ms for pod "kube-proxy-f5jxp" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.794468   77741 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797928   77741 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace has status "Ready":"True"
	I1011 22:29:40.797942   77741 pod_ready.go:82] duration metric: took 3.468527ms for pod "kube-scheduler-default-k8s-diff-port-070708" in "kube-system" namespace to be "Ready" ...
	I1011 22:29:40.797949   77741 pod_ready.go:39] duration metric: took 7.568316879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:40.797960   77741 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:29:40.798002   77741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:29:40.813652   77741 api_server.go:72] duration metric: took 7.841294422s to wait for apiserver process to appear ...
	I1011 22:29:40.813672   77741 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:29:40.813689   77741 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1011 22:29:40.817412   77741 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1011 22:29:40.818090   77741 api_server.go:141] control plane version: v1.31.1
	I1011 22:29:40.818107   77741 api_server.go:131] duration metric: took 4.42852ms to wait for apiserver health ...
	I1011 22:29:40.818114   77741 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:29:40.823188   77741 system_pods.go:59] 9 kube-system pods found
	I1011 22:29:40.823213   77741 system_pods.go:61] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:40.823221   77741 system_pods.go:61] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:40.823227   77741 system_pods.go:61] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:40.823233   77741 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:40.823248   77741 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:40.823255   77741 system_pods.go:61] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:40.823263   77741 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:40.823273   77741 system_pods.go:61] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:40.823284   77741 system_pods.go:61] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:40.823296   77741 system_pods.go:74] duration metric: took 5.17626ms to wait for pod list to return data ...
	I1011 22:29:40.823307   77741 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:29:40.825321   77741 default_sa.go:45] found service account: "default"
	I1011 22:29:40.825336   77741 default_sa.go:55] duration metric: took 2.021143ms for default service account to be created ...
	I1011 22:29:40.825342   77741 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:29:41.026940   77741 system_pods.go:86] 9 kube-system pods found
	I1011 22:29:41.026968   77741 system_pods.go:89] "coredns-7c65d6cfc9-gtw9g" [6f4e99be-007f-4fe6-9436-d1eaaee7ec8e] Running
	I1011 22:29:41.026973   77741 system_pods.go:89] "coredns-7c65d6cfc9-zvctp" [1f0fd5a2-533b-4b3b-8454-0c0cc12cbdb6] Running
	I1011 22:29:41.026978   77741 system_pods.go:89] "etcd-default-k8s-diff-port-070708" [ee89a803-a6fa-4b91-99fc-5f514088483f] Running
	I1011 22:29:41.026982   77741 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-070708" [ff485270-ec5a-4d10-ba15-3b375ca3093c] Running
	I1011 22:29:41.026985   77741 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-070708" [9ae8c471-3dd0-4484-8fc3-a5fbe516428c] Running
	I1011 22:29:41.026989   77741 system_pods.go:89] "kube-proxy-f5jxp" [96a6f08b-a873-4f2a-8ef1-4e573368e28e] Running
	I1011 22:29:41.026992   77741 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-070708" [580cb987-4334-4fd9-8d34-8994a583c568] Running
	I1011 22:29:41.026998   77741 system_pods.go:89] "metrics-server-6867b74b74-88h5g" [d1b9fc5b-820d-4324-9883-70cb84f0044f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:29:41.027001   77741 system_pods.go:89] "storage-provisioner" [8029fb14-2375-4536-8176-c0dcaca6319b] Running
	I1011 22:29:41.027009   77741 system_pods.go:126] duration metric: took 201.663243ms to wait for k8s-apps to be running ...
	I1011 22:29:41.027026   77741 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:29:41.027069   77741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:29:41.042219   77741 system_svc.go:56] duration metric: took 15.183864ms WaitForService to wait for kubelet
	I1011 22:29:41.042245   77741 kubeadm.go:582] duration metric: took 8.069890136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:29:41.042260   77741 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:29:41.224020   77741 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:29:41.224044   77741 node_conditions.go:123] node cpu capacity is 2
	I1011 22:29:41.224057   77741 node_conditions.go:105] duration metric: took 181.791827ms to run NodePressure ...
	I1011 22:29:41.224070   77741 start.go:241] waiting for startup goroutines ...
	I1011 22:29:41.224078   77741 start.go:246] waiting for cluster config update ...
	I1011 22:29:41.224091   77741 start.go:255] writing updated cluster config ...
	I1011 22:29:41.224324   77741 ssh_runner.go:195] Run: rm -f paused
	I1011 22:29:41.270922   77741 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:29:41.272826   77741 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-070708" cluster and "default" namespace by default
	I1011 22:29:38.149579   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:40.648994   77373 pod_ready.go:103] pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace has status "Ready":"False"
	I1011 22:29:41.642042   77373 pod_ready.go:82] duration metric: took 4m0.000063385s for pod "metrics-server-6867b74b74-tk8fq" in "kube-system" namespace to be "Ready" ...
	E1011 22:29:41.642084   77373 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 22:29:41.642099   77373 pod_ready.go:39] duration metric: took 4m11.989411916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:29:41.642124   77373 kubeadm.go:597] duration metric: took 4m19.037142189s to restartPrimaryControlPlane
	W1011 22:29:41.642171   77373 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 22:29:41.642194   77373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:29:38.484793   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:29:38.485706   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:38.485901   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:43.486110   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:43.486369   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:29:53.486142   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:29:53.486390   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:08.331378   77373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.689152762s)
	I1011 22:30:08.331467   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:08.348300   77373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 22:30:08.359480   77373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:08.370317   77373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:08.370344   77373 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:08.370400   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:08.381317   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:08.381392   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:08.392591   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:08.403628   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:08.403695   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:08.415304   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.425512   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:08.425585   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:08.436525   77373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:08.447575   77373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:08.447644   77373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:08.458910   77373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:08.507988   77373 kubeadm.go:310] W1011 22:30:08.465544    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.508469   77373 kubeadm.go:310] W1011 22:30:08.466388    3058 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 22:30:08.640893   77373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:16.843613   77373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 22:30:16.843665   77373 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:16.843739   77373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:16.843849   77373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:16.843963   77373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 22:30:16.844020   77373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:16.845663   77373 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:16.845745   77373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:16.845804   77373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:16.845880   77373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:16.845929   77373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:16.845994   77373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:16.846041   77373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:16.846094   77373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:16.846145   77373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:16.846207   77373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:16.846272   77373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:16.846305   77373 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:16.846355   77373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:16.846402   77373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:16.846453   77373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 22:30:16.846503   77373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:16.846566   77373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:16.846663   77373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:16.846762   77373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:16.846845   77373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:16.848425   77373 out.go:235]   - Booting up control plane ...
	I1011 22:30:16.848538   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:16.848673   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:16.848787   77373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:16.848925   77373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:16.849039   77373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:16.849076   77373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:16.849210   77373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 22:30:16.849351   77373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 22:30:16.849437   77373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.393174ms
	I1011 22:30:16.849498   77373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 22:30:16.849550   77373 kubeadm.go:310] [api-check] The API server is healthy after 5.001429588s
	I1011 22:30:16.849648   77373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 22:30:16.849781   77373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 22:30:16.849869   77373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 22:30:16.850052   77373 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-390487 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 22:30:16.850110   77373 kubeadm.go:310] [bootstrap-token] Using token: fihl2i.d50idwk2axnrw24u
	I1011 22:30:16.851665   77373 out.go:235]   - Configuring RBAC rules ...
	I1011 22:30:16.851802   77373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 22:30:16.851885   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 22:30:16.852036   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 22:30:16.852185   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 22:30:16.852323   77373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 22:30:16.852402   77373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 22:30:16.852499   77373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 22:30:16.852541   77373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 22:30:16.852580   77373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 22:30:16.852586   77373 kubeadm.go:310] 
	I1011 22:30:16.852634   77373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 22:30:16.852640   77373 kubeadm.go:310] 
	I1011 22:30:16.852705   77373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 22:30:16.852711   77373 kubeadm.go:310] 
	I1011 22:30:16.852732   77373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 22:30:16.852805   77373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 22:30:16.852878   77373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 22:30:16.852891   77373 kubeadm.go:310] 
	I1011 22:30:16.852990   77373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 22:30:16.853005   77373 kubeadm.go:310] 
	I1011 22:30:16.853073   77373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 22:30:16.853086   77373 kubeadm.go:310] 
	I1011 22:30:16.853162   77373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 22:30:16.853282   77373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 22:30:16.853341   77373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 22:30:16.853347   77373 kubeadm.go:310] 
	I1011 22:30:16.853424   77373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 22:30:16.853529   77373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 22:30:16.853540   77373 kubeadm.go:310] 
	I1011 22:30:16.853643   77373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.853789   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a \
	I1011 22:30:16.853824   77373 kubeadm.go:310] 	--control-plane 
	I1011 22:30:16.853832   77373 kubeadm.go:310] 
	I1011 22:30:16.853954   77373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 22:30:16.853964   77373 kubeadm.go:310] 
	I1011 22:30:16.854083   77373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fihl2i.d50idwk2axnrw24u \
	I1011 22:30:16.854248   77373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0c031a2a786199a3907107b2ec700dfa716e56b13f1c7cc0133e96b5cdbc48a 
	I1011 22:30:16.854264   77373 cni.go:84] Creating CNI manager for ""
	I1011 22:30:16.854273   77373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 22:30:16.855848   77373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 22:30:16.857089   77373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 22:30:16.868823   77373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 22:30:16.895913   77373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 22:30:16.896017   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:16.896028   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-390487 minikube.k8s.io/updated_at=2024_10_11T22_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=no-preload-390487 minikube.k8s.io/primary=true
	I1011 22:30:16.918531   77373 ops.go:34] apiserver oom_adj: -16
	I1011 22:30:17.097050   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:17.598029   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:13.486436   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:13.486750   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:18.098092   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:18.597526   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.098157   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:19.597575   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.097754   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:20.597957   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.097558   77373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 22:30:21.213123   77373 kubeadm.go:1113] duration metric: took 4.317171517s to wait for elevateKubeSystemPrivileges
	I1011 22:30:21.213168   77373 kubeadm.go:394] duration metric: took 4m58.664336163s to StartCluster
	I1011 22:30:21.213191   77373 settings.go:142] acquiring lock: {Name:mk5c033d6d574ec0605ad05a3458e00c0cde3174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.213283   77373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:30:21.215630   77373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-11611/kubeconfig: {Name:mkd4124025e2466b205c5bca1d72c2fde3c85a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 22:30:21.215852   77373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1011 22:30:21.215989   77373 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 22:30:21.216063   77373 config.go:182] Loaded profile config "no-preload-390487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:30:21.216088   77373 addons.go:69] Setting storage-provisioner=true in profile "no-preload-390487"
	I1011 22:30:21.216109   77373 addons.go:234] Setting addon storage-provisioner=true in "no-preload-390487"
	I1011 22:30:21.216102   77373 addons.go:69] Setting default-storageclass=true in profile "no-preload-390487"
	W1011 22:30:21.216118   77373 addons.go:243] addon storage-provisioner should already be in state true
	I1011 22:30:21.216128   77373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-390487"
	I1011 22:30:21.216131   77373 addons.go:69] Setting metrics-server=true in profile "no-preload-390487"
	I1011 22:30:21.216171   77373 addons.go:234] Setting addon metrics-server=true in "no-preload-390487"
	W1011 22:30:21.216182   77373 addons.go:243] addon metrics-server should already be in state true
	I1011 22:30:21.216218   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216149   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216627   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216644   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216662   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.216602   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.216737   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.217280   77373 out.go:177] * Verifying Kubernetes components...
	I1011 22:30:21.218773   77373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 22:30:21.232485   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I1011 22:30:21.232801   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I1011 22:30:21.233029   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233243   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.233615   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233642   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233762   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.233785   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.233966   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234065   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.234485   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234520   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.234611   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.234669   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.235151   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1011 22:30:21.235614   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.236082   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.236106   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.236479   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.236777   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.240463   77373 addons.go:234] Setting addon default-storageclass=true in "no-preload-390487"
	W1011 22:30:21.240483   77373 addons.go:243] addon default-storageclass should already be in state true
	I1011 22:30:21.240512   77373 host.go:66] Checking if "no-preload-390487" exists ...
	I1011 22:30:21.240874   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.240916   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.250949   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I1011 22:30:21.251469   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.251958   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.251983   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.252397   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.252586   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.253093   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1011 22:30:21.253443   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.253949   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.253966   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.254413   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.254479   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.254605   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.256241   77373 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 22:30:21.256246   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.257646   77373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 22:30:21.257651   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 22:30:21.257712   77373 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 22:30:21.257736   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.258740   77373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.258761   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 22:30:21.258779   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.259764   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1011 22:30:21.260129   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.260673   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.260697   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.261024   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.261691   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.261902   77373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 22:30:21.261949   77373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 22:30:21.262376   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.262401   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262655   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.262698   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.262901   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263233   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.263339   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.263345   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263511   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.263523   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.263700   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.263807   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.263942   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.302779   77373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1011 22:30:21.303319   77373 main.go:141] libmachine: () Calling .GetVersion
	I1011 22:30:21.303864   77373 main.go:141] libmachine: Using API Version  1
	I1011 22:30:21.303888   77373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 22:30:21.304289   77373 main.go:141] libmachine: () Calling .GetMachineName
	I1011 22:30:21.304516   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetState
	I1011 22:30:21.306544   77373 main.go:141] libmachine: (no-preload-390487) Calling .DriverName
	I1011 22:30:21.306810   77373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.306829   77373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 22:30:21.306852   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHHostname
	I1011 22:30:21.309788   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310242   77373 main.go:141] libmachine: (no-preload-390487) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7a:6d", ip: ""} in network mk-no-preload-390487: {Iface:virbr2 ExpiryTime:2024-10-11 23:24:56 +0000 UTC Type:0 Mac:52:54:00:dc:7a:6d Iaid: IPaddr:192.168.61.55 Prefix:24 Hostname:no-preload-390487 Clientid:01:52:54:00:dc:7a:6d}
	I1011 22:30:21.310268   77373 main.go:141] libmachine: (no-preload-390487) DBG | domain no-preload-390487 has defined IP address 192.168.61.55 and MAC address 52:54:00:dc:7a:6d in network mk-no-preload-390487
	I1011 22:30:21.310466   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHPort
	I1011 22:30:21.310646   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHKeyPath
	I1011 22:30:21.310786   77373 main.go:141] libmachine: (no-preload-390487) Calling .GetSSHUsername
	I1011 22:30:21.310911   77373 sshutil.go:53] new ssh client: &{IP:192.168.61.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/no-preload-390487/id_rsa Username:docker}
	I1011 22:30:21.439567   77373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 22:30:21.477421   77373 node_ready.go:35] waiting up to 6m0s for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.539701   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 22:30:21.544312   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 22:30:21.548001   77373 node_ready.go:49] node "no-preload-390487" has status "Ready":"True"
	I1011 22:30:21.548022   77373 node_ready.go:38] duration metric: took 70.568638ms for node "no-preload-390487" to be "Ready" ...
	I1011 22:30:21.548032   77373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:21.576393   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:21.585171   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 22:30:21.585197   77373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 22:30:21.681671   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 22:30:21.681698   77373 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 22:30:21.725963   77373 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:21.725988   77373 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 22:30:21.759564   77373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 22:30:22.490072   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490099   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490219   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490236   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490470   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490494   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490504   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490512   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490596   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490596   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490627   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490642   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.490653   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.490883   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.490899   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.490922   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490981   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:22.490996   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.491008   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.509939   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:22.509972   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:22.510355   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:22.510371   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:22.510421   77373 main.go:141] libmachine: (no-preload-390487) DBG | Closing plugin on server side
	I1011 22:30:23.029621   77373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.270011552s)
	I1011 22:30:23.029675   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.029691   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.029972   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.029989   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.029999   77373 main.go:141] libmachine: Making call to close driver server
	I1011 22:30:23.030008   77373 main.go:141] libmachine: (no-preload-390487) Calling .Close
	I1011 22:30:23.030228   77373 main.go:141] libmachine: Successfully made call to close driver server
	I1011 22:30:23.030242   77373 main.go:141] libmachine: Making call to close connection to plugin binary
	I1011 22:30:23.030253   77373 addons.go:475] Verifying addon metrics-server=true in "no-preload-390487"
	I1011 22:30:23.031821   77373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1011 22:30:23.033206   77373 addons.go:510] duration metric: took 1.817229636s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1011 22:30:23.583317   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.583341   77373 pod_ready.go:82] duration metric: took 2.006915507s for pod "coredns-7c65d6cfc9-cpdng" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.583350   77373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588077   77373 pod_ready.go:93] pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.588094   77373 pod_ready.go:82] duration metric: took 4.738751ms for pod "coredns-7c65d6cfc9-swwtf" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.588103   77373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592411   77373 pod_ready.go:93] pod "etcd-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:23.592429   77373 pod_ready.go:82] duration metric: took 4.320594ms for pod "etcd-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:23.592437   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:25.599226   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:28.107173   77373 pod_ready.go:103] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"False"
	I1011 22:30:29.598395   77373 pod_ready.go:93] pod "kube-apiserver-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.598422   77373 pod_ready.go:82] duration metric: took 6.005976584s for pod "kube-apiserver-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.598438   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603104   77373 pod_ready.go:93] pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.603123   77373 pod_ready.go:82] duration metric: took 4.67757ms for pod "kube-controller-manager-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.603133   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606558   77373 pod_ready.go:93] pod "kube-proxy-4g8nw" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.606574   77373 pod_ready.go:82] duration metric: took 3.433207ms for pod "kube-proxy-4g8nw" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.606582   77373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610559   77373 pod_ready.go:93] pod "kube-scheduler-no-preload-390487" in "kube-system" namespace has status "Ready":"True"
	I1011 22:30:29.610575   77373 pod_ready.go:82] duration metric: took 3.985639ms for pod "kube-scheduler-no-preload-390487" in "kube-system" namespace to be "Ready" ...
	I1011 22:30:29.610582   77373 pod_ready.go:39] duration metric: took 8.062539556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 22:30:29.610598   77373 api_server.go:52] waiting for apiserver process to appear ...
	I1011 22:30:29.610667   77373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 22:30:29.625884   77373 api_server.go:72] duration metric: took 8.409998013s to wait for apiserver process to appear ...
	I1011 22:30:29.625906   77373 api_server.go:88] waiting for apiserver healthz status ...
	I1011 22:30:29.625925   77373 api_server.go:253] Checking apiserver healthz at https://192.168.61.55:8443/healthz ...
	I1011 22:30:29.629905   77373 api_server.go:279] https://192.168.61.55:8443/healthz returned 200:
	ok
	I1011 22:30:29.631557   77373 api_server.go:141] control plane version: v1.31.1
	I1011 22:30:29.631575   77373 api_server.go:131] duration metric: took 5.661997ms to wait for apiserver health ...
	I1011 22:30:29.631583   77373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 22:30:29.637936   77373 system_pods.go:59] 9 kube-system pods found
	I1011 22:30:29.637963   77373 system_pods.go:61] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.637970   77373 system_pods.go:61] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.637974   77373 system_pods.go:61] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.637979   77373 system_pods.go:61] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.637984   77373 system_pods.go:61] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.637989   77373 system_pods.go:61] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.637997   77373 system_pods.go:61] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.638010   77373 system_pods.go:61] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.638018   77373 system_pods.go:61] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.638027   77373 system_pods.go:74] duration metric: took 6.437989ms to wait for pod list to return data ...
	I1011 22:30:29.638034   77373 default_sa.go:34] waiting for default service account to be created ...
	I1011 22:30:29.640483   77373 default_sa.go:45] found service account: "default"
	I1011 22:30:29.640499   77373 default_sa.go:55] duration metric: took 2.455351ms for default service account to be created ...
	I1011 22:30:29.640508   77373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 22:30:29.800014   77373 system_pods.go:86] 9 kube-system pods found
	I1011 22:30:29.800043   77373 system_pods.go:89] "coredns-7c65d6cfc9-cpdng" [cd94e043-da2c-49c5-84df-2ab683ebdc37] Running
	I1011 22:30:29.800049   77373 system_pods.go:89] "coredns-7c65d6cfc9-swwtf" [00984077-22c9-4c6c-a0f0-84e3a460b2dc] Running
	I1011 22:30:29.800053   77373 system_pods.go:89] "etcd-no-preload-390487" [4b44e790-9493-4835-8d73-e8468a06411b] Running
	I1011 22:30:29.800057   77373 system_pods.go:89] "kube-apiserver-no-preload-390487" [94c16977-1428-4869-b452-e8566c7a5223] Running
	I1011 22:30:29.800060   77373 system_pods.go:89] "kube-controller-manager-no-preload-390487" [4a4b7877-2c5b-47df-bd4e-b757852f3c18] Running
	I1011 22:30:29.800064   77373 system_pods.go:89] "kube-proxy-4g8nw" [d50e6c35-accf-4fbd-9f76-d7621d382fd4] Running
	I1011 22:30:29.800069   77373 system_pods.go:89] "kube-scheduler-no-preload-390487" [bf876cc4-8590-4a6f-acca-cd0b7928fc1f] Running
	I1011 22:30:29.800074   77373 system_pods.go:89] "metrics-server-6867b74b74-26g42" [faa0e007-ef61-4c3a-813e-4cea5052c564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 22:30:29.800078   77373 system_pods.go:89] "storage-provisioner" [56f955c1-7782-4612-92cd-483ddc048439] Running
	I1011 22:30:29.800086   77373 system_pods.go:126] duration metric: took 159.572896ms to wait for k8s-apps to be running ...
	I1011 22:30:29.800093   77373 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 22:30:29.800138   77373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:29.815064   77373 system_svc.go:56] duration metric: took 14.962996ms WaitForService to wait for kubelet
	I1011 22:30:29.815090   77373 kubeadm.go:582] duration metric: took 8.599206932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 22:30:29.815106   77373 node_conditions.go:102] verifying NodePressure condition ...
	I1011 22:30:29.997185   77373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1011 22:30:29.997214   77373 node_conditions.go:123] node cpu capacity is 2
	I1011 22:30:29.997224   77373 node_conditions.go:105] duration metric: took 182.114064ms to run NodePressure ...
	I1011 22:30:29.997235   77373 start.go:241] waiting for startup goroutines ...
	I1011 22:30:29.997242   77373 start.go:246] waiting for cluster config update ...
	I1011 22:30:29.997254   77373 start.go:255] writing updated cluster config ...
	I1011 22:30:29.997529   77373 ssh_runner.go:195] Run: rm -f paused
	I1011 22:30:30.044917   77373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 22:30:30.046918   77373 out.go:177] * Done! kubectl is now configured to use "no-preload-390487" cluster and "default" namespace by default
	I1011 22:30:53.486259   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:30:53.486495   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:30:53.486516   78126 kubeadm.go:310] 
	I1011 22:30:53.486567   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:30:53.486648   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:30:53.486666   78126 kubeadm.go:310] 
	I1011 22:30:53.486700   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:30:53.486730   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:30:53.486821   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:30:53.486830   78126 kubeadm.go:310] 
	I1011 22:30:53.486937   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:30:53.486977   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:30:53.487010   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:30:53.487024   78126 kubeadm.go:310] 
	I1011 22:30:53.487110   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:30:53.487191   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:30:53.487198   78126 kubeadm.go:310] 
	I1011 22:30:53.487297   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:30:53.487384   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:30:53.487458   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:30:53.487534   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:30:53.487541   78126 kubeadm.go:310] 
	I1011 22:30:53.488360   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:30:53.488439   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:30:53.488531   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1011 22:30:53.488667   78126 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1011 22:30:53.488716   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1011 22:30:53.952777   78126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 22:30:53.967422   78126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 22:30:53.978023   78126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 22:30:53.978040   78126 kubeadm.go:157] found existing configuration files:
	
	I1011 22:30:53.978084   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 22:30:53.988067   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 22:30:53.988133   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 22:30:53.998439   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 22:30:54.007839   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 22:30:54.007898   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 22:30:54.018395   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.029122   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 22:30:54.029185   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 22:30:54.038663   78126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 22:30:54.047857   78126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 22:30:54.047908   78126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 22:30:54.057703   78126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 22:30:54.128676   78126 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1011 22:30:54.129034   78126 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 22:30:54.266478   78126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 22:30:54.266571   78126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 22:30:54.266672   78126 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 22:30:54.450911   78126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 22:30:54.452928   78126 out.go:235]   - Generating certificates and keys ...
	I1011 22:30:54.453027   78126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 22:30:54.453102   78126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 22:30:54.453225   78126 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 22:30:54.453494   78126 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 22:30:54.453600   78126 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 22:30:54.453677   78126 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 22:30:54.453782   78126 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 22:30:54.453873   78126 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 22:30:54.454181   78126 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 22:30:54.454602   78126 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 22:30:54.454684   78126 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 22:30:54.454754   78126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 22:30:54.608855   78126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 22:30:54.680299   78126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 22:30:54.978324   78126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 22:30:55.264430   78126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 22:30:55.284144   78126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 22:30:55.285349   78126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 22:30:55.285416   78126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 22:30:55.429922   78126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 22:30:55.431671   78126 out.go:235]   - Booting up control plane ...
	I1011 22:30:55.431768   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 22:30:55.439681   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 22:30:55.440740   78126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 22:30:55.441431   78126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 22:30:55.452190   78126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 22:31:35.453160   78126 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1011 22:31:35.453256   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:35.453470   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:40.453793   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:40.453969   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:31:50.454345   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:31:50.454598   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:10.455392   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:10.455660   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457189   78126 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1011 22:32:50.457414   78126 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1011 22:32:50.457426   78126 kubeadm.go:310] 
	I1011 22:32:50.457525   78126 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1011 22:32:50.457602   78126 kubeadm.go:310] 		timed out waiting for the condition
	I1011 22:32:50.457612   78126 kubeadm.go:310] 
	I1011 22:32:50.457658   78126 kubeadm.go:310] 	This error is likely caused by:
	I1011 22:32:50.457704   78126 kubeadm.go:310] 		- The kubelet is not running
	I1011 22:32:50.457853   78126 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1011 22:32:50.457864   78126 kubeadm.go:310] 
	I1011 22:32:50.457993   78126 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1011 22:32:50.458044   78126 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1011 22:32:50.458110   78126 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1011 22:32:50.458130   78126 kubeadm.go:310] 
	I1011 22:32:50.458290   78126 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1011 22:32:50.458385   78126 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1011 22:32:50.458404   78126 kubeadm.go:310] 
	I1011 22:32:50.458507   78126 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1011 22:32:50.458595   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1011 22:32:50.458689   78126 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1011 22:32:50.458786   78126 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1011 22:32:50.458798   78126 kubeadm.go:310] 
	I1011 22:32:50.459707   78126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 22:32:50.459843   78126 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1011 22:32:50.459932   78126 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1011 22:32:50.459998   78126 kubeadm.go:394] duration metric: took 7m57.374144019s to StartCluster
	I1011 22:32:50.460042   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1011 22:32:50.460103   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 22:32:50.502433   78126 cri.go:89] found id: ""
	I1011 22:32:50.502459   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.502470   78126 logs.go:284] No container was found matching "kube-apiserver"
	I1011 22:32:50.502477   78126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1011 22:32:50.502537   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 22:32:50.536367   78126 cri.go:89] found id: ""
	I1011 22:32:50.536388   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.536396   78126 logs.go:284] No container was found matching "etcd"
	I1011 22:32:50.536401   78126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1011 22:32:50.536444   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 22:32:50.568028   78126 cri.go:89] found id: ""
	I1011 22:32:50.568053   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.568063   78126 logs.go:284] No container was found matching "coredns"
	I1011 22:32:50.568070   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1011 22:32:50.568126   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 22:32:50.609088   78126 cri.go:89] found id: ""
	I1011 22:32:50.609115   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.609126   78126 logs.go:284] No container was found matching "kube-scheduler"
	I1011 22:32:50.609133   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1011 22:32:50.609195   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 22:32:50.643071   78126 cri.go:89] found id: ""
	I1011 22:32:50.643099   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.643109   78126 logs.go:284] No container was found matching "kube-proxy"
	I1011 22:32:50.643116   78126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 22:32:50.643175   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 22:32:50.682752   78126 cri.go:89] found id: ""
	I1011 22:32:50.682775   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.682783   78126 logs.go:284] No container was found matching "kube-controller-manager"
	I1011 22:32:50.682788   78126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1011 22:32:50.682850   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 22:32:50.715646   78126 cri.go:89] found id: ""
	I1011 22:32:50.715671   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.715681   78126 logs.go:284] No container was found matching "kindnet"
	I1011 22:32:50.715688   78126 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 22:32:50.715751   78126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 22:32:50.748748   78126 cri.go:89] found id: ""
	I1011 22:32:50.748774   78126 logs.go:282] 0 containers: []
	W1011 22:32:50.748785   78126 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1011 22:32:50.748796   78126 logs.go:123] Gathering logs for container status ...
	I1011 22:32:50.748810   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 22:32:50.792729   78126 logs.go:123] Gathering logs for kubelet ...
	I1011 22:32:50.792758   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 22:32:50.855892   78126 logs.go:123] Gathering logs for dmesg ...
	I1011 22:32:50.855924   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 22:32:50.881322   78126 logs.go:123] Gathering logs for describe nodes ...
	I1011 22:32:50.881357   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1011 22:32:50.974517   78126 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1011 22:32:50.974540   78126 logs.go:123] Gathering logs for CRI-O ...
	I1011 22:32:50.974557   78126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1011 22:32:51.079616   78126 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1011 22:32:51.079674   78126 out.go:270] * 
	W1011 22:32:51.079731   78126 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.079745   78126 out.go:270] * 
	W1011 22:32:51.080525   78126 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 22:32:51.083625   78126 out.go:201] 
	W1011 22:32:51.085042   78126 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1011 22:32:51.085079   78126 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1011 22:32:51.085104   78126 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1011 22:32:51.086605   78126 out.go:201] 
	
	
	==> CRI-O <==
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.323189174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686642323169727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23b3ec1a-2a5c-492a-94f1-e44d9bfe608a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.323802076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=786a9c2f-68e0-4773-a3ab-dd175fca1f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.323866676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=786a9c2f-68e0-4773-a3ab-dd175fca1f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.323913609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=786a9c2f-68e0-4773-a3ab-dd175fca1f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.355187500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=959744d2-4c46-4912-8eda-38425448809a name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.355288638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=959744d2-4c46-4912-8eda-38425448809a name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.356852035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d169db3e-1ac4-4a16-82ea-5c9991275d18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.357480730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686642357407358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d169db3e-1ac4-4a16-82ea-5c9991275d18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.358068944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2c2d181-6baf-4d5e-88b6-e258c27dea22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.358156939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2c2d181-6baf-4d5e-88b6-e258c27dea22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.358201056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2c2d181-6baf-4d5e-88b6-e258c27dea22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.392135506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86be7e59-e0b4-4a2c-8cf8-266ebfc2d606 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.392242399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86be7e59-e0b4-4a2c-8cf8-266ebfc2d606 name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.393522430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61447bc8-5c82-4787-9be8-3f2337d68ee7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.394214959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686642394183868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61447bc8-5c82-4787-9be8-3f2337d68ee7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.394930663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50dea9e4-0efb-4a5a-842d-15081cc79b68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.394979102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50dea9e4-0efb-4a5a-842d-15081cc79b68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.395013886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=50dea9e4-0efb-4a5a-842d-15081cc79b68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.426350485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3c3a272-1e1a-4fb6-aeff-396df2b769ce name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.426439376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3c3a272-1e1a-4fb6-aeff-396df2b769ce name=/runtime.v1.RuntimeService/Version
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.428991785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1911cf2d-ccd0-4469-8649-6f9b4fa5aeda name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.429395380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728686642429371847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1911cf2d-ccd0-4469-8649-6f9b4fa5aeda name=/runtime.v1.ImageService/ImageFsInfo
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.430252458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e58b2b9-b54e-4470-87d5-9774ae532681 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.430306117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e58b2b9-b54e-4470-87d5-9774ae532681 name=/runtime.v1.RuntimeService/ListContainers
	Oct 11 22:44:02 old-k8s-version-323416 crio[634]: time="2024-10-11 22:44:02.430343300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7e58b2b9-b54e-4470-87d5-9774ae532681 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct11 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050928] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.110729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.580711] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636937] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.157348] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.054654] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064708] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.165294] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.159768] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.272781] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.674030] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.066044] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.222707] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[Oct11 22:25] kauditd_printk_skb: 46 callbacks suppressed
	[Oct11 22:28] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Oct11 22:30] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.064434] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:44:02 up 19 min,  0 users,  load average: 0.08, 0.07, 0.02
	Linux old-k8s-version-323416 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00056d2a0, 0xc000667c80)
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: goroutine 161 [syscall]:
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: syscall.Syscall6(0xe8, 0xd, 0xc000c17b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000c17b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0002138c0, 0x0, 0x0, 0x0)
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000389a40)
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Oct 11 22:43:57 old-k8s-version-323416 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 11 22:43:57 old-k8s-version-323416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 11 22:43:57 old-k8s-version-323416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 136.
	Oct 11 22:43:57 old-k8s-version-323416 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 11 22:43:57 old-k8s-version-323416 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6829]: I1011 22:43:57.897186    6829 server.go:416] Version: v1.20.0
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6829]: I1011 22:43:57.897423    6829 server.go:837] Client rotation is on, will bootstrap in background
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6829]: I1011 22:43:57.899208    6829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6829]: W1011 22:43:57.900109    6829 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 11 22:43:57 old-k8s-version-323416 kubelet[6829]: I1011 22:43:57.900226    6829 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 2 (225.207374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-323416" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.29s)
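For reference, a minimal shell sketch of the troubleshooting steps quoted in the kubeadm output and the minikube suggestion above. The profile name old-k8s-version-323416 and the CRI-O socket path /var/run/crio/crio.sock are taken from the log; the first four commands would be run inside the minikube VM (for example via 'minikube ssh -p old-k8s-version-323416'), and CONTAINERID is a placeholder for an ID reported by crictl:

	# Is the kubelet running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List all Kubernetes containers known to CRI-O and inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# From the host, retry the start with the cgroup driver the minikube output suggests
	minikube start -p old-k8s-version-323416 --extra-config=kubelet.cgroup-driver=systemd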

                                                
                                    

Test pass (250/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.71
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 14.83
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 83.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 200.7
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/PullSecret 11.52
34 TestAddons/parallel/Registry 18.17
36 TestAddons/parallel/InspektorGadget 11.71
39 TestAddons/parallel/CSI 62.7
40 TestAddons/parallel/Headlamp 19.77
41 TestAddons/parallel/CloudSpanner 6.7
42 TestAddons/parallel/LocalPath 57.57
43 TestAddons/parallel/NvidiaDevicePlugin 6.58
44 TestAddons/parallel/Yakd 12.01
47 TestCertOptions 44.76
48 TestCertExpiration 271.13
50 TestForceSystemdFlag 139.57
51 TestForceSystemdEnv 44.41
53 TestKVMDriverInstallOrUpdate 5.17
57 TestErrorSpam/setup 43.04
58 TestErrorSpam/start 0.34
59 TestErrorSpam/status 0.71
60 TestErrorSpam/pause 1.58
61 TestErrorSpam/unpause 1.7
62 TestErrorSpam/stop 5.56
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 80.61
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 55.3
69 TestFunctional/serial/KubeContext 0.04
70 TestFunctional/serial/KubectlGetPods 0.07
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
74 TestFunctional/serial/CacheCmd/cache/add_local 2.26
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
76 TestFunctional/serial/CacheCmd/cache/list 0.05
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
78 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
79 TestFunctional/serial/CacheCmd/cache/delete 0.09
80 TestFunctional/serial/MinikubeKubectlCmd 0.1
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
82 TestFunctional/serial/ExtraConfig 33.64
83 TestFunctional/serial/ComponentHealth 0.06
84 TestFunctional/serial/LogsCmd 1.45
85 TestFunctional/serial/LogsFileCmd 1.51
86 TestFunctional/serial/InvalidService 4.61
88 TestFunctional/parallel/ConfigCmd 0.33
89 TestFunctional/parallel/DashboardCmd 19.25
90 TestFunctional/parallel/DryRun 0.29
91 TestFunctional/parallel/InternationalLanguage 0.15
92 TestFunctional/parallel/StatusCmd 1.17
96 TestFunctional/parallel/ServiceCmdConnect 8.43
97 TestFunctional/parallel/AddonsCmd 0.13
98 TestFunctional/parallel/PersistentVolumeClaim 46.58
100 TestFunctional/parallel/SSHCmd 0.41
101 TestFunctional/parallel/CpCmd 1.3
102 TestFunctional/parallel/MySQL 35.49
103 TestFunctional/parallel/FileSync 0.2
104 TestFunctional/parallel/CertSync 1.46
108 TestFunctional/parallel/NodeLabels 0.06
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
112 TestFunctional/parallel/License 1.25
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.47
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
120 TestFunctional/parallel/ImageCommands/ImageBuild 5.9
121 TestFunctional/parallel/ImageCommands/Setup 2.18
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
123 TestFunctional/parallel/ProfileCmd/profile_list 0.42
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
125 TestFunctional/parallel/MountCmd/any-port 9.47
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.31
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.82
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.73
145 TestFunctional/parallel/ServiceCmd/List 0.29
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
148 TestFunctional/parallel/MountCmd/specific-port 1.73
149 TestFunctional/parallel/ServiceCmd/Format 0.31
150 TestFunctional/parallel/ServiceCmd/URL 0.28
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 203.65
159 TestMultiControlPlane/serial/DeployApp 7.46
160 TestMultiControlPlane/serial/PingHostFromPods 1.14
161 TestMultiControlPlane/serial/AddWorkerNode 57.93
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
164 TestMultiControlPlane/serial/CopyFile 12.77
170 TestMultiControlPlane/serial/DeleteSecondaryNode 16.72
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
173 TestMultiControlPlane/serial/RestartCluster 293.31
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
175 TestMultiControlPlane/serial/AddSecondaryNode 84.69
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
180 TestJSONOutput/start/Command 56.13
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.7
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.62
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 7.34
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.19
208 TestMainNoArgs 0.04
209 TestMinikubeProfile 89.37
212 TestMountStart/serial/StartWithMountFirst 27.75
213 TestMountStart/serial/VerifyMountFirst 0.35
214 TestMountStart/serial/StartWithMountSecond 29.27
215 TestMountStart/serial/VerifyMountSecond 0.36
216 TestMountStart/serial/DeleteFirst 0.67
217 TestMountStart/serial/VerifyMountPostDelete 0.37
218 TestMountStart/serial/Stop 1.27
219 TestMountStart/serial/RestartStopped 23.59
220 TestMountStart/serial/VerifyMountPostStop 0.36
223 TestMultiNode/serial/FreshStart2Nodes 112.83
224 TestMultiNode/serial/DeployApp2Nodes 6.17
225 TestMultiNode/serial/PingHostFrom2Pods 0.78
226 TestMultiNode/serial/AddNode 50.3
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.57
229 TestMultiNode/serial/CopyFile 7.16
230 TestMultiNode/serial/StopNode 2.31
231 TestMultiNode/serial/StartAfterStop 40.63
233 TestMultiNode/serial/DeleteNode 2.06
235 TestMultiNode/serial/RestartMultiNode 182.69
236 TestMultiNode/serial/ValidateNameConflict 46.45
243 TestScheduledStopUnix 112.73
247 TestRunningBinaryUpgrade 194.49
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 97.25
261 TestNetworkPlugins/group/false 2.92
265 TestStoppedBinaryUpgrade/Setup 3.22
266 TestStoppedBinaryUpgrade/Upgrade 153.14
267 TestNoKubernetes/serial/StartWithStopK8s 60.86
268 TestNoKubernetes/serial/Start 44.24
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
271 TestNoKubernetes/serial/ProfileList 1.96
280 TestPause/serial/Start 60.5
281 TestNoKubernetes/serial/Stop 1.77
282 TestNoKubernetes/serial/StartNoArgs 49.22
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
284 TestPause/serial/SecondStartNoReconfiguration 139.9
285 TestPause/serial/Pause 1.1
286 TestPause/serial/VerifyStatus 0.29
287 TestPause/serial/Unpause 1.32
288 TestPause/serial/PauseAgain 1.02
289 TestPause/serial/DeletePaused 0.85
290 TestPause/serial/VerifyDeletedResources 3.67
291 TestNetworkPlugins/group/auto/Start 113.8
292 TestNetworkPlugins/group/kindnet/Start 72.02
293 TestNetworkPlugins/group/calico/Start 78.67
294 TestNetworkPlugins/group/auto/KubeletFlags 0.27
295 TestNetworkPlugins/group/auto/NetCatPod 11.27
296 TestNetworkPlugins/group/custom-flannel/Start 95.84
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
299 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
300 TestNetworkPlugins/group/auto/DNS 0.17
301 TestNetworkPlugins/group/auto/Localhost 0.14
302 TestNetworkPlugins/group/auto/HairPin 0.16
303 TestNetworkPlugins/group/kindnet/DNS 0.17
304 TestNetworkPlugins/group/kindnet/Localhost 0.13
305 TestNetworkPlugins/group/kindnet/HairPin 0.11
306 TestNetworkPlugins/group/enable-default-cni/Start 105.21
307 TestNetworkPlugins/group/flannel/Start 109.85
308 TestNetworkPlugins/group/calico/ControllerPod 5.02
309 TestNetworkPlugins/group/calico/KubeletFlags 0.31
310 TestNetworkPlugins/group/calico/NetCatPod 13.49
311 TestNetworkPlugins/group/calico/DNS 0.15
312 TestNetworkPlugins/group/calico/Localhost 0.14
313 TestNetworkPlugins/group/calico/HairPin 0.12
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.55
316 TestNetworkPlugins/group/custom-flannel/DNS 0.18
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
319 TestNetworkPlugins/group/bridge/Start 83.36
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
329 TestNetworkPlugins/group/flannel/NetCatPod 11.27
331 TestStartStop/group/no-preload/serial/FirstStart 108.94
332 TestNetworkPlugins/group/flannel/DNS 0.16
333 TestNetworkPlugins/group/flannel/Localhost 0.13
334 TestNetworkPlugins/group/flannel/HairPin 0.12
336 TestStartStop/group/embed-certs/serial/FirstStart 96.21
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
338 TestNetworkPlugins/group/bridge/NetCatPod 12.26
339 TestNetworkPlugins/group/bridge/DNS 0.15
340 TestNetworkPlugins/group/bridge/Localhost 0.18
341 TestNetworkPlugins/group/bridge/HairPin 0.14
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.37
344 TestStartStop/group/no-preload/serial/DeployApp 10.28
345 TestStartStop/group/embed-certs/serial/DeployApp 10.26
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
357 TestStartStop/group/no-preload/serial/SecondStart 682.75
358 TestStartStop/group/embed-certs/serial/SecondStart 601
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 598.12
361 TestStartStop/group/old-k8s-version/serial/Stop 1.29
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/newest-cni/serial/FirstStart 51.98
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
376 TestStartStop/group/newest-cni/serial/Stop 10.67
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/newest-cni/serial/SecondStart 37.01
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/newest-cni/serial/Pause 2.48
x
+
TestDownloadOnly/v1.20.0/json-events (28.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-404031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-404031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.711354777s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.71s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1011 20:58:28.735132   18814 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1011 20:58:28.735248   18814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-404031
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-404031: exit status 85 (59.637337ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-404031 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |          |
	|         | -p download-only-404031        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:00.063486   18826 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:00.063581   18826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:00.063586   18826 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:00.063591   18826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:00.063794   18826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	W1011 20:58:00.063974   18826 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19749-11611/.minikube/config/config.json: open /home/jenkins/minikube-integration/19749-11611/.minikube/config/config.json: no such file or directory
	I1011 20:58:00.064573   18826 out.go:352] Setting JSON to true
	I1011 20:58:00.065404   18826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2425,"bootTime":1728677855,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 20:58:00.065504   18826 start.go:139] virtualization: kvm guest
	I1011 20:58:00.067931   18826 out.go:97] [download-only-404031] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1011 20:58:00.068047   18826 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball: no such file or directory
	I1011 20:58:00.068058   18826 notify.go:220] Checking for updates...
	I1011 20:58:00.069583   18826 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:58:00.070991   18826 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:00.072496   18826 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:58:00.073937   18826 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:00.075419   18826 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1011 20:58:00.077898   18826 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:58:00.078113   18826 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:00.180083   18826 out.go:97] Using the kvm2 driver based on user configuration
	I1011 20:58:00.180120   18826 start.go:297] selected driver: kvm2
	I1011 20:58:00.180127   18826 start.go:901] validating driver "kvm2" against <nil>
	I1011 20:58:00.180461   18826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:00.180587   18826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 20:58:00.195507   18826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 20:58:00.195553   18826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:00.196205   18826 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1011 20:58:00.196419   18826 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:58:00.196454   18826 cni.go:84] Creating CNI manager for ""
	I1011 20:58:00.196521   18826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:58:00.196534   18826 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:00.196597   18826 start.go:340] cluster config:
	{Name:download-only-404031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-404031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:00.196838   18826 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:00.198525   18826 out.go:97] Downloading VM boot image ...
	I1011 20:58:00.198577   18826 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1011 20:58:12.901240   18826 out.go:97] Starting "download-only-404031" primary control-plane node in "download-only-404031" cluster
	I1011 20:58:12.901271   18826 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 20:58:13.009983   18826 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1011 20:58:13.010023   18826 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:13.010333   18826 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1011 20:58:13.012126   18826 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1011 20:58:13.012144   18826 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1011 20:58:13.674108   18826 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-404031 host does not exist
	  To start a cluster, run: "minikube start -p download-only-404031"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-404031
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (14.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-873204 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-873204 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.826634117s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (14.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1011 20:58:43.875806   18814 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1011 20:58:43.875843   18814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-873204
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-873204: exit status 85 (58.023835ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-404031 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | -p download-only-404031        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-404031        | download-only-404031 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | -o=json --download-only        | download-only-873204 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | -p download-only-873204        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:29.087998   19098 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:29.088103   19098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:29.088112   19098 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:29.088117   19098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:29.088306   19098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 20:58:29.088841   19098 out.go:352] Setting JSON to true
	I1011 20:58:29.089654   19098 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2454,"bootTime":1728677855,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 20:58:29.089737   19098 start.go:139] virtualization: kvm guest
	I1011 20:58:29.091992   19098 out.go:97] [download-only-873204] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 20:58:29.092128   19098 notify.go:220] Checking for updates...
	I1011 20:58:29.093463   19098 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:58:29.094591   19098 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:29.095710   19098 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 20:58:29.096976   19098 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 20:58:29.098354   19098 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1011 20:58:29.100722   19098 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:58:29.100919   19098 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:29.132254   19098 out.go:97] Using the kvm2 driver based on user configuration
	I1011 20:58:29.132285   19098 start.go:297] selected driver: kvm2
	I1011 20:58:29.132291   19098 start.go:901] validating driver "kvm2" against <nil>
	I1011 20:58:29.132580   19098 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:29.132645   19098 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19749-11611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1011 20:58:29.146468   19098 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1011 20:58:29.146504   19098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:29.147023   19098 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1011 20:58:29.147171   19098 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:58:29.147198   19098 cni.go:84] Creating CNI manager for ""
	I1011 20:58:29.147242   19098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1011 20:58:29.147249   19098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:29.147297   19098 start.go:340] cluster config:
	{Name:download-only-873204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-873204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:29.147385   19098 iso.go:125] acquiring lock: {Name:mk830d0cdc2f04347b7d628630eccd21199b9234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:29.148957   19098 out.go:97] Starting "download-only-873204" primary control-plane node in "download-only-873204" cluster
	I1011 20:58:29.148970   19098 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:29.773110   19098 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1011 20:58:29.773151   19098 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:29.773319   19098 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1011 20:58:29.775071   19098 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1011 20:58:29.775085   19098 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1011 20:58:30.412560   19098 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19749-11611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-873204 host does not exist
	  To start a cluster, run: "minikube start -p download-only-873204"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-873204
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1011 20:58:44.415511   18814 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-999700 --alsologtostderr --binary-mirror http://127.0.0.1:33833 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-999700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-999700
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (83.61s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-313531 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-313531 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.043553132s)
helpers_test.go:175: Cleaning up "offline-crio-313531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-313531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-313531: (1.567470737s)
--- PASS: TestOffline (83.61s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335640
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-335640: exit status 85 (49.562402ms)

                                                
                                                
-- stdout --
	* Profile "addons-335640" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335640"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335640
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-335640: exit status 85 (50.456343ms)

                                                
                                                
-- stdout --
	* Profile "addons-335640" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335640"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (200.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-335640 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-335640 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.696163257s)
--- PASS: TestAddons/Setup (200.70s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-335640 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-335640 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/PullSecret (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-335640 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-335640 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3caf89f2-1c8a-48d3-bedc-9796d7b20ff7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3caf89f2-1c8a-48d3-bedc-9796d7b20ff7] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 11.003798734s
addons_test.go:633: (dbg) Run:  kubectl --context addons-335640 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-335640 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-335640 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (11.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.614868ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fscdh" [b7eae652-7687-4daf-bcb5-ba3501d88f5b] Running
I1011 21:02:26.055079   18814 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1011 21:02:26.055099   18814 kapi.go:107] duration metric: took 26.972559ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003871193s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9bpbj" [ce628b0d-73e1-4fa3-a071-c9091c1ae2ba] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006650729s
addons_test.go:331: (dbg) Run:  kubectl --context addons-335640 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-335640 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-335640 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.143537503s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 ip
2024/10/11 21:02:43 [DEBUG] GET http://192.168.39.109:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.17s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bfj46" [8f6061c8-2f82-4617-9b7f-c43b66a25288] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004195402s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable inspektor-gadget --alsologtostderr -v=1: (5.706995272s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 26.980934ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-335640 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-335640 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [37574bc1-c7a5-4d99-98ba-291534e9e72c] Pending
helpers_test.go:344: "task-pv-pod" [37574bc1-c7a5-4d99-98ba-291534e9e72c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [37574bc1-c7a5-4d99-98ba-291534e9e72c] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.003900043s
addons_test.go:511: (dbg) Run:  kubectl --context addons-335640 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335640 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335640 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-335640 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-335640 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-335640 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-335640 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [75f37b43-e15a-41da-b663-34daf95a8e16] Pending
helpers_test.go:344: "task-pv-pod-restore" [75f37b43-e15a-41da-b663-34daf95a8e16] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [75f37b43-e15a-41da-b663-34daf95a8e16] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003755365s
addons_test.go:553: (dbg) Run:  kubectl --context addons-335640 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-335640 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-335640 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable volumesnapshots --alsologtostderr -v=1: (1.035369515s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.790708802s)
--- PASS: TestAddons/parallel/CSI (62.70s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-335640 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-d8wb2" [62af2220-c0c7-479a-9812-aea83546a866] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-d8wb2" [62af2220-c0c7-479a-9812-aea83546a866] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-d8wb2" [62af2220-c0c7-479a-9812-aea83546a866] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.0036214s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable headlamp --alsologtostderr -v=1: (5.835884948s)
--- PASS: TestAddons/parallel/Headlamp (19.77s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-xdrgp" [30274847-2d0c-47b6-875c-b32d40f9e34d] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003778139s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-335640 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-335640 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7a34c1a4-a064-47f7-ac9f-55f2dc8ea560] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7a34c1a4-a064-47f7-ac9f-55f2dc8ea560] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7a34c1a4-a064-47f7-ac9f-55f2dc8ea560] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006284542s
addons_test.go:902: (dbg) Run:  kubectl --context addons-335640 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 ssh "cat /opt/local-path-provisioner/pvc-5e03d062-901b-4d87-ab60-2b2a39b9acde_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-335640 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-335640 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.792269366s)
--- PASS: TestAddons/parallel/LocalPath (57.57s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I1011 21:02:26.028151   18814 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4rwwd" [fdff7711-2b34-4674-b560-4769911e0b24] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003605915s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-w8qh6" [3c08a6bb-a723-40e1-83c0-b95e0f0b18f5] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.050884339s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-335640 addons disable yakd --alsologtostderr -v=1: (5.954791062s)
--- PASS: TestAddons/parallel/Yakd (12.01s)

                                                
                                    
x
+
TestCertOptions (44.76s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-413599 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-413599 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.496005282s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-413599 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-413599 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-413599 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-413599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-413599
--- PASS: TestCertOptions (44.76s)

                                                
                                    
x
+
TestCertExpiration (271.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993898 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993898 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (41.002576663s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993898 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993898 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (49.058813562s)
helpers_test.go:175: Cleaning up "cert-expiration-993898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-993898
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-993898: (1.071678751s)
--- PASS: TestCertExpiration (271.13s)

                                                
                                    
x
+
TestForceSystemdFlag (139.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-906123 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-906123 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m18.507019177s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-906123 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-906123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-906123
--- PASS: TestForceSystemdFlag (139.57s)

                                                
                                    
x
+
TestForceSystemdEnv (44.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-326657 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-326657 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.439924925s)
helpers_test.go:175: Cleaning up "force-systemd-env-326657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-326657
--- PASS: TestForceSystemdEnv (44.41s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1011 22:09:55.660006   18814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1011 22:09:55.660156   18814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1011 22:09:55.689223   18814 install.go:62] docker-machine-driver-kvm2: exit status 1
W1011 22:09:55.689598   18814 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1011 22:09:55.689670   18814 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4058946765/001/docker-machine-driver-kvm2
I1011 22:09:55.928576   18814 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4058946765/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52face0 0x52face0 0x52face0 0x52face0 0x52face0 0x52face0 0x52face0] Decompressors:map[bz2:0xc000a80e60 gz:0xc000a80e68 tar:0xc000a80e10 tar.bz2:0xc000a80e20 tar.gz:0xc000a80e30 tar.xz:0xc000a80e40 tar.zst:0xc000a80e50 tbz2:0xc000a80e20 tgz:0xc000a80e30 txz:0xc000a80e40 tzst:0xc000a80e50 xz:0xc000a80e70 zip:0xc000a80e80 zst:0xc000a80e78] Getters:map[file:0xc001bd2520 http:0xc000726190 https:0xc000726370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1011 22:09:55.928623   18814 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4058946765/001/docker-machine-driver-kvm2
I1011 22:09:58.814434   18814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1011 22:09:58.814533   18814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1011 22:09:58.841469   18814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1011 22:09:58.841509   18814 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1011 22:09:58.841574   18814 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1011 22:09:58.841606   18814 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4058946765/002/docker-machine-driver-kvm2
I1011 22:09:58.898162   18814 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4058946765/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52face0 0x52face0 0x52face0 0x52face0 0x52face0 0x52face0 0x52face0] Decompressors:map[bz2:0xc000a80e60 gz:0xc000a80e68 tar:0xc000a80e10 tar.bz2:0xc000a80e20 tar.gz:0xc000a80e30 tar.xz:0xc000a80e40 tar.zst:0xc000a80e50 tbz2:0xc000a80e20 tgz:0xc000a80e30 txz:0xc000a80e40 tzst:0xc000a80e50 xz:0xc000a80e70 zip:0xc000a80e80 zst:0xc000a80e78] Getters:map[file:0xc001bd2dd0 http:0xc000727680 https:0xc0007276d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1011 22:09:58.898224   18814 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4058946765/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.17s)
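The two 404 responses above are expected: the test exercises falling back from the arch-specific release asset (docker-machine-driver-kvm2-amd64) to the common one (docker-machine-driver-kvm2) when the arch-specific download fails its checksum fetch. The Go sketch below illustrates only that try-then-fall-back pattern; it is not minikube's download code, and the helper name fetchDriver and its error handling are assumptions made for illustration.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchDriver tries the arch-specific release asset first and falls back to the
// common asset when the response is not HTTP 200 (for example the 404 seen above).
func fetchDriver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	urls := []string{
		base + "/docker-machine-driver-kvm2-" + arch, // arch-specific, may 404
		base + "/docker-machine-driver-kvm2",         // common fallback
	}
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			continue // try the next URL
		}
		out, err := os.Create(dst)
		if err != nil {
			resp.Body.Close()
			return err
		}
		_, err = io.Copy(out, resp.Body)
		resp.Body.Close()
		out.Close()
		return err
	}
	return fmt.Errorf("no usable download URL for driver %s", version)
}

func main() {
	if err := fetchDriver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}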

                                                
                                    
x
+
TestErrorSpam/setup (43.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-171218 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-171218 --driver=kvm2  --container-runtime=crio
E1011 21:12:06.388703   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.395150   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.406642   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.428138   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.469628   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.551069   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:06.712544   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:07.034223   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:07.675925   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-171218 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-171218 --driver=kvm2  --container-runtime=crio: (43.04097332s)
--- PASS: TestErrorSpam/setup (43.04s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 start --dry-run
E1011 21:12:08.957691   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 unpause
E1011 21:12:11.519076   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop: (2.315781372s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop
E1011 21:12:16.641302   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop: (1.199120042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-171218 --log_dir /tmp/nospam-171218 stop: (2.045398433s)
--- PASS: TestErrorSpam/stop (5.56s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19749-11611/.minikube/files/etc/test/nested/copy/18814/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.61s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1011 21:12:26.883565   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:12:47.365915   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:13:28.329047   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-297998 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.61246676s)
--- PASS: TestFunctional/serial/StartWithProxy (80.61s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1011 21:13:39.740697   18814 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-297998 --alsologtostderr -v=8: (55.300275441s)
functional_test.go:663: soft start took 55.300987151s for "functional-297998" cluster.
I1011 21:14:35.041314   18814 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (55.30s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-297998 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:3.1: (1.087586031s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:3.3: (1.209114586s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 cache add registry.k8s.io/pause:latest: (1.064766853s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-297998 /tmp/TestFunctionalserialCacheCmdcacheadd_local1341267878/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache add minikube-local-cache-test:functional-297998
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 cache add minikube-local-cache-test:functional-297998: (1.917754781s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache delete minikube-local-cache-test:functional-297998
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-297998
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.462205ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 kubectl -- --context functional-297998 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-297998 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.64s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1011 21:14:50.250892   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-297998 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.637141391s)
functional_test.go:761: restart took 33.637244581s for "functional-297998" cluster.
I1011 21:15:16.676755   18814 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (33.64s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-297998 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 logs: (1.444611425s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 logs --file /tmp/TestFunctionalserialLogsFileCmd927992329/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 logs --file /tmp/TestFunctionalserialLogsFileCmd927992329/001/logs.txt: (1.510351623s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.61s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-297998 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-297998
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-297998: exit status 115 (267.814809ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.40:32121 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-297998 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-297998 delete -f testdata/invalidsvc.yaml: (1.154485759s)
--- PASS: TestFunctional/serial/InvalidService (4.61s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 config get cpus: exit status 14 (59.879911ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 config get cpus: exit status 14 (44.864997ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
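The exit codes above capture the behaviour being verified: `config get cpus` exits 14 while the key is unset and 0 once `config set cpus 2` has run. The Go sketch below reproduces that round-trip with os/exec; it is illustrative only (not part of the test suite) and assumes the binary path and profile name shown in the log are available on the machine running it.

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary used by this report and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1 // binary missing or not started
	}
	return 0
}

func main() {
	fmt.Println(run("-p", "functional-297998", "config", "unset", "cpus")) // 0
	fmt.Println(run("-p", "functional-297998", "config", "get", "cpus"))   // 14: key not found
	fmt.Println(run("-p", "functional-297998", "config", "set", "cpus", "2"))
	fmt.Println(run("-p", "functional-297998", "config", "get", "cpus")) // 0 once set
}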

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (19.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-297998 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-297998 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28996: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.25s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-297998 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.421361ms)

                                                
                                                
-- stdout --
	* [functional-297998] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:15:35.443030   28163 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:15:35.443162   28163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:15:35.443171   28163 out.go:358] Setting ErrFile to fd 2...
	I1011 21:15:35.443177   28163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:15:35.443360   28163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:15:35.443869   28163 out.go:352] Setting JSON to false
	I1011 21:15:35.444762   28163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3480,"bootTime":1728677855,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:15:35.444858   28163 start.go:139] virtualization: kvm guest
	I1011 21:15:35.447042   28163 out.go:177] * [functional-297998] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 21:15:35.448509   28163 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:15:35.448511   28163 notify.go:220] Checking for updates...
	I1011 21:15:35.451323   28163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:15:35.452795   28163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:15:35.454124   28163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:15:35.455543   28163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:15:35.456966   28163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:15:35.458502   28163 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:15:35.458917   28163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:15:35.458988   28163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:15:35.473693   28163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I1011 21:15:35.474096   28163 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:15:35.474670   28163 main.go:141] libmachine: Using API Version  1
	I1011 21:15:35.474697   28163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:15:35.475056   28163 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:15:35.475251   28163 main.go:141] libmachine: (functional-297998) Calling .DriverName
	I1011 21:15:35.475493   28163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:15:35.475937   28163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:15:35.475992   28163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:15:35.492150   28163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I1011 21:15:35.492670   28163 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:15:35.493192   28163 main.go:141] libmachine: Using API Version  1
	I1011 21:15:35.493227   28163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:15:35.493650   28163 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:15:35.493871   28163 main.go:141] libmachine: (functional-297998) Calling .DriverName
	I1011 21:15:35.531671   28163 out.go:177] * Using the kvm2 driver based on existing profile
	I1011 21:15:35.532981   28163 start.go:297] selected driver: kvm2
	I1011 21:15:35.533000   28163 start.go:901] validating driver "kvm2" against &{Name:functional-297998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-297998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:15:35.533110   28163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:15:35.535392   28163 out.go:201] 
	W1011 21:15:35.536699   28163 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 21:15:35.538936   28163 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-297998 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-297998 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.779839ms)

                                                
                                                
-- stdout --
	* [functional-297998] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:15:35.734771   28267 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:15:35.734903   28267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:15:35.734915   28267 out.go:358] Setting ErrFile to fd 2...
	I1011 21:15:35.734922   28267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:15:35.735321   28267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:15:35.735889   28267 out.go:352] Setting JSON to false
	I1011 21:15:35.736846   28267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3481,"bootTime":1728677855,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 21:15:35.736917   28267 start.go:139] virtualization: kvm guest
	I1011 21:15:35.739118   28267 out.go:177] * [functional-297998] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1011 21:15:35.740589   28267 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:15:35.740657   28267 notify.go:220] Checking for updates...
	I1011 21:15:35.743183   28267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:15:35.744566   28267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 21:15:35.745941   28267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 21:15:35.747185   28267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 21:15:35.748395   28267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:15:35.750081   28267 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:15:35.750514   28267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:15:35.750590   28267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:15:35.770501   28267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I1011 21:15:35.771019   28267 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:15:35.771537   28267 main.go:141] libmachine: Using API Version  1
	I1011 21:15:35.771561   28267 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:15:35.771952   28267 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:15:35.772204   28267 main.go:141] libmachine: (functional-297998) Calling .DriverName
	I1011 21:15:35.772500   28267 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:15:35.772924   28267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:15:35.772976   28267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:15:35.789545   28267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I1011 21:15:35.789996   28267 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:15:35.790522   28267 main.go:141] libmachine: Using API Version  1
	I1011 21:15:35.790539   28267 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:15:35.790882   28267 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:15:35.791313   28267 main.go:141] libmachine: (functional-297998) Calling .DriverName
	I1011 21:15:35.825556   28267 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1011 21:15:35.827016   28267 start.go:297] selected driver: kvm2
	I1011 21:15:35.827045   28267 start.go:901] validating driver "kvm2" against &{Name:functional-297998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-297998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:15:35.827193   28267 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:15:35.829698   28267 out.go:201] 
	W1011 21:15:35.831305   28267 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1011 21:15:35.832811   28267 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-297998 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-297998 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jkq78" [cdb14e7f-2a58-4e52-bbfb-f3fde7cc3811] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jkq78" [cdb14e7f-2a58-4e52-bbfb-f3fde7cc3811] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004552785s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.40:31083
functional_test.go:1675: http://192.168.39.40:31083: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-jkq78

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.40:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.40:31083
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.43s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [15c56895-f2b0-44df-9ffe-99e102853e22] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004086087s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-297998 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-297998 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-297998 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-297998 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de07eb4b-2eda-425c-a03a-5f5915363d8d] Pending
helpers_test.go:344: "sp-pod" [de07eb4b-2eda-425c-a03a-5f5915363d8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de07eb4b-2eda-425c-a03a-5f5915363d8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.024553515s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-297998 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-297998 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-297998 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ba9d049-a6b0-414e-972e-3ea96cb1fd4a] Pending
helpers_test.go:344: "sp-pod" [6ba9d049-a6b0-414e-972e-3ea96cb1fd4a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/10/11 21:15:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [6ba9d049-a6b0-414e-972e-3ea96cb1fd4a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.00372567s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-297998 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.58s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh -n functional-297998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cp functional-297998:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd889558685/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh -n functional-297998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh -n functional-297998 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (35.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-297998 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-xgwxw" [71bcd9bd-02c8-4fdd-85d3-765562159ba5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-xgwxw" [71bcd9bd-02c8-4fdd-85d3-765562159ba5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 34.003848901s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-297998 exec mysql-6cdb49bbb-xgwxw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-297998 exec mysql-6cdb49bbb-xgwxw -- mysql -ppassword -e "show databases;": exit status 1 (123.395859ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1011 21:16:13.741948   18814 retry.go:31] will retry after 979.561993ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-297998 exec mysql-6cdb49bbb-xgwxw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.49s)
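Note: the first exec above failed with ERROR 2002 because the pod was Running but mysqld was not yet accepting socket connections; the harness retried roughly a second later and passed. A hedged sketch of the same readiness check as a retry loop (deploy/mysql is assumed from the pod name mysql-6cdb49bbb-xgwxw; the loop is illustrative, not part of the test):
	until kubectl --context functional-297998 exec deploy/mysql -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do sleep 2; done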

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18814/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /etc/test/nested/copy/18814/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
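Note: /etc/test/nested/copy/18814/hosts reaches the guest via minikube's file sync. As a sketch, assuming the default MINIKUBE_HOME, a file placed under ~/.minikube/files on the host is copied to the same path inside the guest during provisioning:
	mkdir -p ~/.minikube/files/etc/test/nested/copy/18814
	cp /etc/hosts ~/.minikube/files/etc/test/nested/copy/18814/hosts
	out/minikube-linux-amd64 -p <profile> start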

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18814.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /etc/ssl/certs/18814.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18814.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /usr/share/ca-certificates/18814.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/188142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /etc/ssl/certs/188142.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/188142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /usr/share/ca-certificates/188142.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
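Note: the checks above look for the synced certificates under three names each: the original /etc/ssl/certs/<pid>.pem, a copy under /usr/share/ca-certificates, and a hash-named alias (51391683.0, 3ec20f2e.0). A hedged sketch of how such a hash name can be derived with openssl (illustrative; the test only cats the files, and the mapping of hash to cert is an assumption here):
	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/18814.pem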

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-297998 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
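Note: the go-template above prints only the label keys of the first node. An equivalent, simpler manual check (not what the test runs) is:
	kubectl --context functional-297998 get nodes --show-labels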

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "sudo systemctl is-active docker": exit status 1 (229.287034ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "sudo systemctl is-active containerd": exit status 1 (213.616342ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
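Note: the non-zero exits above are the expected result. With crio as the active runtime, `systemctl is-active docker` and `... containerd` print "inactive" and return status 3, which surfaces as "ssh: Process exited with status 3" and an overall exit status 1. A direct check against the guest (single quotes keep $? from being expanded by the host shell):
	out/minikube-linux-amd64 -p functional-297998 ssh 'sudo systemctl is-active docker; echo exit=$?'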

                                                
                                    
x
+
TestFunctional/parallel/License (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.251038384s)
--- PASS: TestFunctional/parallel/License (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-297998 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-297998 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rdpgl" [27e32d07-639a-46bf-887c-556a2482a090] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rdpgl" [27e32d07-639a-46bf-887c-556a2482a090] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004441811s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)
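Note: the deployment above is exposed as a NodePort service named hello-node. One way to resolve its URL from the host (illustrative, not part of this subtest) is:
	out/minikube-linux-amd64 -p functional-297998 service hello-node --url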

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-297998 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-297998
localhost/kicbase/echo-server:functional-297998
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-297998 image ls --format short --alsologtostderr:
I1011 21:15:45.214165   29166 out.go:345] Setting OutFile to fd 1 ...
I1011 21:15:45.214302   29166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.214312   29166 out.go:358] Setting ErrFile to fd 2...
I1011 21:15:45.214319   29166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.214521   29166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
I1011 21:15:45.215140   29166 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.215256   29166 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.215664   29166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.215714   29166 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.230435   29166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41415
I1011 21:15:45.230919   29166 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.231472   29166 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.231502   29166 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.231809   29166 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.231973   29166 main.go:141] libmachine: (functional-297998) Calling .GetState
I1011 21:15:45.233948   29166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.233992   29166 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.250483   29166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
I1011 21:15:45.250955   29166 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.251382   29166 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.251403   29166 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.251758   29166 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.251973   29166 main.go:141] libmachine: (functional-297998) Calling .DriverName
I1011 21:15:45.252150   29166 ssh_runner.go:195] Run: systemctl --version
I1011 21:15:45.252176   29166 main.go:141] libmachine: (functional-297998) Calling .GetSSHHostname
I1011 21:15:45.254873   29166 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.255236   29166 main.go:141] libmachine: (functional-297998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:59:76", ip: ""} in network mk-functional-297998: {Iface:virbr1 ExpiryTime:2024-10-11 22:12:34 +0000 UTC Type:0 Mac:52:54:00:93:59:76 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-297998 Clientid:01:52:54:00:93:59:76}
I1011 21:15:45.255269   29166 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.255429   29166 main.go:141] libmachine: (functional-297998) Calling .GetSSHPort
I1011 21:15:45.255597   29166 main.go:141] libmachine: (functional-297998) Calling .GetSSHKeyPath
I1011 21:15:45.255744   29166 main.go:141] libmachine: (functional-297998) Calling .GetSSHUsername
I1011 21:15:45.255842   29166 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/functional-297998/id_rsa Username:docker}
I1011 21:15:45.333267   29166 ssh_runner.go:195] Run: sudo crictl images --output json
I1011 21:15:45.372212   29166 main.go:141] libmachine: Making call to close driver server
I1011 21:15:45.372230   29166 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:45.372573   29166 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:45.372605   29166 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:45.372626   29166 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:45.372638   29166 main.go:141] libmachine: Making call to close driver server
I1011 21:15:45.372646   29166 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:45.372867   29166 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:45.372873   29166 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:45.372909   29166 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
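Note: the stderr above shows how the listing is produced: minikube opens an SSH session to the guest, runs `sudo crictl images --output json`, and formats the result. The same raw data can be inspected directly, for example:
	out/minikube-linux-amd64 -p functional-297998 ssh "sudo crictl images"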

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-297998 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-297998  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/my-image                      | functional-297998  | cc06bae260a76 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| localhost/minikube-local-cache-test     | functional-297998  | e428a2f225386 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-297998 image ls --format table --alsologtostderr:
I1011 21:15:51.751790   29362 out.go:345] Setting OutFile to fd 1 ...
I1011 21:15:51.751913   29362 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:51.751922   29362 out.go:358] Setting ErrFile to fd 2...
I1011 21:15:51.751927   29362 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:51.752110   29362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
I1011 21:15:51.752696   29362 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:51.752791   29362 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:51.753129   29362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:51.753167   29362 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:51.767708   29362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
I1011 21:15:51.768164   29362 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:51.768741   29362 main.go:141] libmachine: Using API Version  1
I1011 21:15:51.768769   29362 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:51.769129   29362 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:51.769335   29362 main.go:141] libmachine: (functional-297998) Calling .GetState
I1011 21:15:51.771163   29362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:51.771206   29362 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:51.788181   29362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
I1011 21:15:51.788568   29362 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:51.789017   29362 main.go:141] libmachine: Using API Version  1
I1011 21:15:51.789038   29362 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:51.789325   29362 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:51.789491   29362 main.go:141] libmachine: (functional-297998) Calling .DriverName
I1011 21:15:51.789686   29362 ssh_runner.go:195] Run: systemctl --version
I1011 21:15:51.789707   29362 main.go:141] libmachine: (functional-297998) Calling .GetSSHHostname
I1011 21:15:51.792084   29362 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:51.792473   29362 main.go:141] libmachine: (functional-297998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:59:76", ip: ""} in network mk-functional-297998: {Iface:virbr1 ExpiryTime:2024-10-11 22:12:34 +0000 UTC Type:0 Mac:52:54:00:93:59:76 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-297998 Clientid:01:52:54:00:93:59:76}
I1011 21:15:51.792498   29362 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:51.792623   29362 main.go:141] libmachine: (functional-297998) Calling .GetSSHPort
I1011 21:15:51.792771   29362 main.go:141] libmachine: (functional-297998) Calling .GetSSHKeyPath
I1011 21:15:51.792917   29362 main.go:141] libmachine: (functional-297998) Calling .GetSSHUsername
I1011 21:15:51.793045   29362 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/functional-297998/id_rsa Username:docker}
I1011 21:15:51.869724   29362 ssh_runner.go:195] Run: sudo crictl images --output json
I1011 21:15:51.909715   29362 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.909730   29362 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.909993   29362 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.910012   29362 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:51.910013   29362 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:51.910031   29362 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.910044   29362 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.910258   29362 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:51.910267   29362 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.910280   29362 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-297998 image ls --format json --alsologtostderr:
[{"id":"b07804f00a08b29aea717d1a0d5f7889252d862ab2b6ec03367b516b7d96445d","repoDigests":["docker.io/library/75fb244e112a2279169ce6b48a991a9304ef94ad6ec1dfc6a307cf4ddc275cef-tmp@sha256:748223407e480e25940c34e482f6d41d082b943263bd789a9b74431a8742b535"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e428a2f2253866be060a78ae66f473707df4deacf6ea46a7edc448361c29ef5c","repoDigests":["localhost/minikube-local-cache-test@sha256:b3ab1dfbde8346c337720c3b05acc0a40cc2559d97bd801e80b3fbfc0e97bee9"],"repoTags":["localhost/minikube-local-cache-test:functional-297998"],"size":"3328"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDiges
ts":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.i
o/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-297998"],"size":"4943877"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repo
Digests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5
e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha2
56:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cc06bae260a76f518d5e78522cf2cb7fcb0672eb4eaf60143303f8f3a2b48b23","repoDigests":["localhost/my-image@sha256:bcb3172340254a26a7da16a90264993df91877e2449282f36fdddddf90cd96b4"]
,"repoTags":["localhost/my-image:functional-297998"],"size":"1468600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-297998 image ls --format json --alsologtostderr:
I1011 21:15:51.537381   29339 out.go:345] Setting OutFile to fd 1 ...
I1011 21:15:51.537620   29339 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:51.537628   29339 out.go:358] Setting ErrFile to fd 2...
I1011 21:15:51.537632   29339 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:51.537818   29339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
I1011 21:15:51.538356   29339 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:51.538448   29339 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:51.538868   29339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:51.538945   29339 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:51.553764   29339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
I1011 21:15:51.554359   29339 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:51.554953   29339 main.go:141] libmachine: Using API Version  1
I1011 21:15:51.554981   29339 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:51.555289   29339 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:51.555476   29339 main.go:141] libmachine: (functional-297998) Calling .GetState
I1011 21:15:51.557116   29339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:51.557149   29339 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:51.572007   29339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
I1011 21:15:51.572398   29339 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:51.572810   29339 main.go:141] libmachine: Using API Version  1
I1011 21:15:51.572830   29339 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:51.573212   29339 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:51.573411   29339 main.go:141] libmachine: (functional-297998) Calling .DriverName
I1011 21:15:51.573600   29339 ssh_runner.go:195] Run: systemctl --version
I1011 21:15:51.573628   29339 main.go:141] libmachine: (functional-297998) Calling .GetSSHHostname
I1011 21:15:51.576145   29339 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:51.576471   29339 main.go:141] libmachine: (functional-297998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:59:76", ip: ""} in network mk-functional-297998: {Iface:virbr1 ExpiryTime:2024-10-11 22:12:34 +0000 UTC Type:0 Mac:52:54:00:93:59:76 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-297998 Clientid:01:52:54:00:93:59:76}
I1011 21:15:51.576501   29339 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:51.576579   29339 main.go:141] libmachine: (functional-297998) Calling .GetSSHPort
I1011 21:15:51.576743   29339 main.go:141] libmachine: (functional-297998) Calling .GetSSHKeyPath
I1011 21:15:51.576861   29339 main.go:141] libmachine: (functional-297998) Calling .GetSSHUsername
I1011 21:15:51.576970   29339 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/functional-297998/id_rsa Username:docker}
I1011 21:15:51.657773   29339 ssh_runner.go:195] Run: sudo crictl images --output json
I1011 21:15:51.703216   29339 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.703228   29339 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.703476   29339 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:51.703520   29339 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.703557   29339 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:51.703573   29339 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.703584   29339 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.703788   29339 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.703807   29339 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:51.703806   29339 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-297998 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e428a2f2253866be060a78ae66f473707df4deacf6ea46a7edc448361c29ef5c
repoDigests:
- localhost/minikube-local-cache-test@sha256:b3ab1dfbde8346c337720c3b05acc0a40cc2559d97bd801e80b3fbfc0e97bee9
repoTags:
- localhost/minikube-local-cache-test:functional-297998
size: "3328"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-297998
size: "4943877"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-297998 image ls --format yaml --alsologtostderr:
I1011 21:15:45.421373   29190 out.go:345] Setting OutFile to fd 1 ...
I1011 21:15:45.421484   29190 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.421495   29190 out.go:358] Setting ErrFile to fd 2...
I1011 21:15:45.421500   29190 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.421710   29190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
I1011 21:15:45.422361   29190 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.422492   29190 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.422926   29190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.422973   29190 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.437892   29190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
I1011 21:15:45.438364   29190 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.438993   29190 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.439019   29190 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.439319   29190 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.439494   29190 main.go:141] libmachine: (functional-297998) Calling .GetState
I1011 21:15:45.441356   29190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.441411   29190 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.457222   29190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
I1011 21:15:45.457671   29190 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.458147   29190 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.458169   29190 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.458473   29190 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.458641   29190 main.go:141] libmachine: (functional-297998) Calling .DriverName
I1011 21:15:45.458822   29190 ssh_runner.go:195] Run: systemctl --version
I1011 21:15:45.458855   29190 main.go:141] libmachine: (functional-297998) Calling .GetSSHHostname
I1011 21:15:45.461386   29190 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.461817   29190 main.go:141] libmachine: (functional-297998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:59:76", ip: ""} in network mk-functional-297998: {Iface:virbr1 ExpiryTime:2024-10-11 22:12:34 +0000 UTC Type:0 Mac:52:54:00:93:59:76 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-297998 Clientid:01:52:54:00:93:59:76}
I1011 21:15:45.461849   29190 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.461999   29190 main.go:141] libmachine: (functional-297998) Calling .GetSSHPort
I1011 21:15:45.462136   29190 main.go:141] libmachine: (functional-297998) Calling .GetSSHKeyPath
I1011 21:15:45.462262   29190 main.go:141] libmachine: (functional-297998) Calling .GetSSHUsername
I1011 21:15:45.462370   29190 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/functional-297998/id_rsa Username:docker}
I1011 21:15:45.541236   29190 ssh_runner.go:195] Run: sudo crictl images --output json
I1011 21:15:45.584256   29190 main.go:141] libmachine: Making call to close driver server
I1011 21:15:45.584273   29190 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:45.584574   29190 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:45.584588   29190 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:45.584623   29190 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:45.584638   29190 main.go:141] libmachine: Making call to close driver server
I1011 21:15:45.584645   29190 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:45.584852   29190 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:45.584874   29190 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:45.584878   29190 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh pgrep buildkitd: exit status 1 (188.055375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image build -t localhost/my-image:functional-297998 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 image build -t localhost/my-image:functional-297998 testdata/build --alsologtostderr: (5.479298084s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-297998 image build -t localhost/my-image:functional-297998 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b07804f00a0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-297998
--> cc06bae260a
Successfully tagged localhost/my-image:functional-297998
cc06bae260a76f518d5e78522cf2cb7fcb0672eb4eaf60143303f8f3a2b48b23
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-297998 image build -t localhost/my-image:functional-297998 testdata/build --alsologtostderr:
I1011 21:15:45.822568   29243 out.go:345] Setting OutFile to fd 1 ...
I1011 21:15:45.822749   29243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.822762   29243 out.go:358] Setting ErrFile to fd 2...
I1011 21:15:45.822767   29243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:15:45.822983   29243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
I1011 21:15:45.823776   29243 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.824379   29243 config.go:182] Loaded profile config "functional-297998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1011 21:15:45.824751   29243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.824787   29243 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.840902   29243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
I1011 21:15:45.841529   29243 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.842270   29243 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.842298   29243 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.842660   29243 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.842907   29243 main.go:141] libmachine: (functional-297998) Calling .GetState
I1011 21:15:45.845082   29243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1011 21:15:45.845124   29243 main.go:141] libmachine: Launching plugin server for driver kvm2
I1011 21:15:45.860413   29243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
I1011 21:15:45.860909   29243 main.go:141] libmachine: () Calling .GetVersion
I1011 21:15:45.861334   29243 main.go:141] libmachine: Using API Version  1
I1011 21:15:45.861355   29243 main.go:141] libmachine: () Calling .SetConfigRaw
I1011 21:15:45.861640   29243 main.go:141] libmachine: () Calling .GetMachineName
I1011 21:15:45.861823   29243 main.go:141] libmachine: (functional-297998) Calling .DriverName
I1011 21:15:45.862001   29243 ssh_runner.go:195] Run: systemctl --version
I1011 21:15:45.862024   29243 main.go:141] libmachine: (functional-297998) Calling .GetSSHHostname
I1011 21:15:45.864477   29243 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.864768   29243 main.go:141] libmachine: (functional-297998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:59:76", ip: ""} in network mk-functional-297998: {Iface:virbr1 ExpiryTime:2024-10-11 22:12:34 +0000 UTC Type:0 Mac:52:54:00:93:59:76 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-297998 Clientid:01:52:54:00:93:59:76}
I1011 21:15:45.864800   29243 main.go:141] libmachine: (functional-297998) DBG | domain functional-297998 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:59:76 in network mk-functional-297998
I1011 21:15:45.864917   29243 main.go:141] libmachine: (functional-297998) Calling .GetSSHPort
I1011 21:15:45.865068   29243 main.go:141] libmachine: (functional-297998) Calling .GetSSHKeyPath
I1011 21:15:45.865202   29243 main.go:141] libmachine: (functional-297998) Calling .GetSSHUsername
I1011 21:15:45.865327   29243 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/functional-297998/id_rsa Username:docker}
I1011 21:15:45.945216   29243 build_images.go:161] Building image from path: /tmp/build.2277398748.tar
I1011 21:15:45.945286   29243 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1011 21:15:45.959738   29243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2277398748.tar
I1011 21:15:45.965845   29243 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2277398748.tar: stat -c "%s %y" /var/lib/minikube/build/build.2277398748.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2277398748.tar': No such file or directory
I1011 21:15:45.965891   29243 ssh_runner.go:362] scp /tmp/build.2277398748.tar --> /var/lib/minikube/build/build.2277398748.tar (3072 bytes)
I1011 21:15:45.995888   29243 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2277398748
I1011 21:15:46.007748   29243 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2277398748 -xf /var/lib/minikube/build/build.2277398748.tar
I1011 21:15:46.018706   29243 crio.go:315] Building image: /var/lib/minikube/build/build.2277398748
I1011 21:15:46.018777   29243 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-297998 /var/lib/minikube/build/build.2277398748 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1011 21:15:51.219304   29243 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-297998 /var/lib/minikube/build/build.2277398748 --cgroup-manager=cgroupfs: (5.200501964s)
I1011 21:15:51.219373   29243 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2277398748
I1011 21:15:51.231679   29243 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2277398748.tar
I1011 21:15:51.249517   29243 build_images.go:217] Built localhost/my-image:functional-297998 from /tmp/build.2277398748.tar
I1011 21:15:51.249553   29243 build_images.go:133] succeeded building to: functional-297998
I1011 21:15:51.249560   29243 build_images.go:134] failed building to: 
I1011 21:15:51.249630   29243 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.249648   29243 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.249890   29243 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.249911   29243 main.go:141] libmachine: Making call to close connection to plugin binary
I1011 21:15:51.249916   29243 main.go:141] libmachine: (functional-297998) DBG | Closing plugin on server side
I1011 21:15:51.249919   29243 main.go:141] libmachine: Making call to close driver server
I1011 21:15:51.249935   29243 main.go:141] libmachine: (functional-297998) Calling .Close
I1011 21:15:51.250139   29243 main.go:141] libmachine: Successfully made call to close driver server
I1011 21:15:51.250152   29243 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.90s)
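For reference, the trace above stages /tmp/build.2277398748.tar under /var/lib/minikube/build and then runs podman build with --cgroup-manager=cgroupfs inside the guest. A minimal sketch of the equivalent user-facing call, assuming a local directory ./my-image containing a Dockerfile (the directory name is hypothetical, not taken from this run):

  # build inside the functional-297998 VM, then confirm the tag shows up
  $ out/minikube-linux-amd64 -p functional-297998 image build -t localhost/my-image:functional-297998 ./my-image
  $ out/minikube-linux-amd64 -p functional-297998 image ls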

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.160088391s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-297998
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "358.858957ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.141068ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "428.421444ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.724929ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
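Both listings above come from the same profile metadata; --light only skips probing each cluster's live status, which is why it returns roughly ten times faster. A small sketch of consuming the output, assuming the JSON keeps its usual valid/invalid arrays with a Name per profile (the field names are an assumption, not taken from this run):

  # print just the profile names from the machine-readable listing (requires jq)
  $ out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'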

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdany-port542082982/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728681326777830689" to /tmp/TestFunctionalparallelMountCmdany-port542082982/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728681326777830689" to /tmp/TestFunctionalparallelMountCmdany-port542082982/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728681326777830689" to /tmp/TestFunctionalparallelMountCmdany-port542082982/001/test-1728681326777830689
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.348627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1011 21:15:27.048484   18814 retry.go:31] will retry after 296.36921ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 11 21:15 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 11 21:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 11 21:15 test-1728681326777830689
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh cat /mount-9p/test-1728681326777830689
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-297998 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9257da7e-1cb4-4e8c-a030-a2995f450873] Pending
helpers_test.go:344: "busybox-mount" [9257da7e-1cb4-4e8c-a030-a2995f450873] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9257da7e-1cb4-4e8c-a030-a2995f450873] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9257da7e-1cb4-4e8c-a030-a2995f450873] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004618863s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-297998 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdany-port542082982/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.47s)
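The block above is the standard 9p round trip: start the mount daemon, confirm the guest sees the mount with findmnt, exercise it from the busybox-mount pod, then force-unmount and stop the daemon. A condensed sketch using the same paths as this run:

  # expose a host directory at /mount-9p inside the guest, then verify and clean up
  $ out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdany-port542082982/001:/mount-9p &
  $ out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-linux-amd64 -p functional-297998 ssh "sudo umount -f /mount-9p"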

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image load --daemon kicbase/echo-server:functional-297998 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-297998 image load --daemon kicbase/echo-server:functional-297998 --alsologtostderr: (2.101904441s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image load --daemon kicbase/echo-server:functional-297998 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-297998
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image load --daemon kicbase/echo-server:functional-297998 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image save kicbase/echo-server:functional-297998 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image rm kicbase/echo-server:functional-297998 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-297998
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 image save --daemon kicbase/echo-server:functional-297998 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-297998
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)
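Taken together, the image subcommands above form a save/load round trip between the host docker daemon, a tarball on disk, and the cluster's crio runtime. The same loop, condensed from the commands in this run:

  $ out/minikube-linux-amd64 -p functional-297998 image save kicbase/echo-server:functional-297998 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-297998 image rm kicbase/echo-server:functional-297998
  $ out/minikube-linux-amd64 -p functional-297998 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-297998 image ls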

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service list -o json
functional_test.go:1494: Took "303.713389ms" to run "out/minikube-linux-amd64 -p functional-297998 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.40:32417
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdspecific-port1276675323/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.775161ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1011 21:15:36.477588   18814 retry.go:31] will retry after 440.470214ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdspecific-port1276675323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "sudo umount -f /mount-9p": exit status 1 (253.346303ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-297998 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdspecific-port1276675323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.40:32417
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
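The List, JSONOutput, HTTPS, Format and URL subtests above all resolve the same NodePort endpoint for hello-node; once the URL is printed it can be probed directly from the host. A minimal sketch against the endpoint reported in this run:

  $ out/minikube-linux-amd64 -p functional-297998 service hello-node --url
  # this run resolved to http://192.168.39.40:32417, which plain curl can hit
  $ curl -sI http://192.168.39.40:32417/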

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T" /mount1: exit status 1 (310.013465ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1011 21:15:38.289567   18814 retry.go:31] will retry after 262.615208ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-297998 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-297998 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-297998 /tmp/TestFunctionalparallelMountCmdVerifyCleanup460661923/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-297998
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-297998
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-297998
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-610874 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1011 21:17:06.382611   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:17:34.092514   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-610874 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.976838927s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.65s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-610874 -- rollout status deployment/busybox: (5.35631504s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-pwg8s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-wdkxg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-pwg8s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-wdkxg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-pwg8s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-wdkxg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.46s)
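The deploy step above applies the busybox DNS-test manifest, waits for the rollout, and then checks name resolution from every replica. A condensed sketch of the same check, reusing one pod name from this run:

  $ out/minikube-linux-amd64 kubectl -p ha-610874 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  $ out/minikube-linux-amd64 kubectl -p ha-610874 -- rollout status deployment/busybox
  $ out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- nslookup kubernetes.default.svc.cluster.local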

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-4sstr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-pwg8s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-pwg8s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-wdkxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-610874 -- exec busybox-7dff88458-wdkxg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-610874 -v=7 --alsologtostderr
E1011 21:20:24.492146   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.498556   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.509898   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.531255   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.572604   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.654034   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:24.815771   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:25.137964   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:25.779541   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:27.061788   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:29.623339   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:34.744930   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:44.986282   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-610874 -v=7 --alsologtostderr: (57.104356244s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-610874 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp testdata/cp-test.txt ha-610874:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874:/home/docker/cp-test.txt ha-610874-m02:/home/docker/cp-test_ha-610874_ha-610874-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test_ha-610874_ha-610874-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874:/home/docker/cp-test.txt ha-610874-m03:/home/docker/cp-test_ha-610874_ha-610874-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test_ha-610874_ha-610874-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874:/home/docker/cp-test.txt ha-610874-m04:/home/docker/cp-test_ha-610874_ha-610874-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test_ha-610874_ha-610874-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp testdata/cp-test.txt ha-610874-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m02:/home/docker/cp-test.txt ha-610874:/home/docker/cp-test_ha-610874-m02_ha-610874.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test_ha-610874-m02_ha-610874.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m02:/home/docker/cp-test.txt ha-610874-m03:/home/docker/cp-test_ha-610874-m02_ha-610874-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test_ha-610874-m02_ha-610874-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m02:/home/docker/cp-test.txt ha-610874-m04:/home/docker/cp-test_ha-610874-m02_ha-610874-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test_ha-610874-m02_ha-610874-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp testdata/cp-test.txt ha-610874-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt ha-610874:/home/docker/cp-test_ha-610874-m03_ha-610874.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test_ha-610874-m03_ha-610874.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt ha-610874-m02:/home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test_ha-610874-m03_ha-610874-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m03:/home/docker/cp-test.txt ha-610874-m04:/home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test_ha-610874-m03_ha-610874-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp testdata/cp-test.txt ha-610874-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266150739/001/cp-test_ha-610874-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt ha-610874:/home/docker/cp-test_ha-610874-m04_ha-610874.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874 "sudo cat /home/docker/cp-test_ha-610874-m04_ha-610874.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt ha-610874-m02:/home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test_ha-610874-m04_ha-610874-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 cp ha-610874-m04:/home/docker/cp-test.txt ha-610874-m03:/home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m03 "sudo cat /home/docker/cp-test_ha-610874-m04_ha-610874-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.77s)
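Every pair above follows the same pattern: minikube cp pushes cp-test.txt to a node (or pulls it back to the host), and minikube ssh -n reads it back on the target to verify the copy. One representative pair from the matrix:

  $ out/minikube-linux-amd64 -p ha-610874 cp testdata/cp-test.txt ha-610874-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p ha-610874 ssh -n ha-610874-m02 "sudo cat /home/docker/cp-test.txt"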

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-610874 node delete m03 -v=7 --alsologtostderr: (15.953259018s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (293.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-610874 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1011 21:35:24.492258   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:36:47.555494   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:37:06.382926   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-610874 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m52.509805776s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (293.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-610874 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-610874 --control-plane -v=7 --alsologtostderr: (1m23.874084206s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.69s)
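This mirrors the earlier AddWorkerNode step, except --control-plane joins the new machine to the HA control plane before the status check is repeated. The two commands, as run above:

  $ out/minikube-linux-amd64 node add -p ha-610874 --control-plane -v=7 --alsologtostderr
  $ out/minikube-linux-amd64 -p ha-610874 status -v=7 --alsologtostderr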

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (56.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-194526 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-194526 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.124701078s)
--- PASS: TestJSONOutput/start/Command (56.13s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-194526 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-194526 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-194526 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-194526 --output=json --user=testUser: (7.337492692s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-852239 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-852239 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.217432ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fba73f77-1b4e-4ee9-9a5f-6657908ac2ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-852239] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2060c27e-d1a0-4522-8879-0f7567afccf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"be519b5e-84e0-4f0e-9b2d-0357252d7510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a51aad87-179e-43d3-abce-e84c24e403af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig"}}
	{"specversion":"1.0","id":"5310107f-71bc-4abf-82bb-eb2cb9eeaaf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube"}}
	{"specversion":"1.0","id":"51d22ccf-941c-49c7-a2e0-137b54f13e1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e482dc77-ac3d-4902-be52-f5d4e4f559f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1ac1f935-14d3-46ba-81ea-d75cc92acd45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-852239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-852239
--- PASS: TestErrorJSONOutput (0.19s)
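The stdout captured above shows the CloudEvents-style JSON lines that minikube writes with --output=json, ending in an io.k8s.sigs.minikube.error event with exit code 56. The program below is a minimal sketch of reading such a stream; the field names (type, data, message, name, exitcode) are taken from that output, while the program itself and its behaviour are assumptions, not the test's implementation.

// A hypothetical reader for minikube --output=json event lines on stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors only the fields visible in the log output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // ignore anything that is not a JSON event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

A stream such as the one above (for example, minikube start -p <profile> --output=json) could be piped into this program to surface the error event.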

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-978774 --driver=kvm2  --container-runtime=crio
E1011 21:40:24.494814   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-978774 --driver=kvm2  --container-runtime=crio: (43.100196046s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-988443 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-988443 --driver=kvm2  --container-runtime=crio: (43.480894146s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-978774
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-988443
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-988443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-988443
helpers_test.go:175: Cleaning up "first-978774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-978774
--- PASS: TestMinikubeProfile (89.37s)
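TestMinikubeProfile above switches the active profile back and forth and inspects the result with "profile list -ojson". The helper below is a hypothetical variant of that flow, not the test's code: it assumes that running "minikube profile" with no arguments prints the active profile name, which is an assumption; the package, function, and binary path are placeholders.

// Package profiles: hypothetical sketch of switching and verifying the active profile.
package profiles

import (
	"fmt"
	"os/exec"
	"strings"
)

// SwitchAndVerify makes `name` the active profile, then reads it back.
func SwitchAndVerify(minikubeBin, name string) error {
	if err := exec.Command(minikubeBin, "profile", name).Run(); err != nil {
		return err
	}
	out, err := exec.Command(minikubeBin, "profile").Output()
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(out)); got != name {
		return fmt.Errorf("active profile is %q, wanted %q", got, name)
	}
	return nil
}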

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-705425 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-705425 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.74986332s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-705425 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-705425 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
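The VerifyMount* steps here and in the following subtests check the host mount by listing /minikube-host and grepping the guest's mount table for a 9p entry. The helper below is a hypothetical sketch of that check (package, function name, binary path, and profile are placeholders), mirroring the "ssh -- mount | grep 9p" invocation above.

// Package mountcheck: hypothetical sketch of the VerifyMount* check.
package mountcheck

import (
	"fmt"
	"os/exec"
	"strings"
)

// Verify9pMount lists the guest's mounts over "minikube ssh" and confirms a
// 9p filesystem (the host mount) is present.
func Verify9pMount(minikubeBin, profile string) error {
	out, err := exec.Command(minikubeBin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube ssh failed: %w", err)
	}
	if !strings.Contains(string(out), "9p") {
		return fmt.Errorf("no 9p mount visible in profile %q", profile)
	}
	return nil
}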

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-717507 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1011 21:42:06.383503   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-717507 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.270673706s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.27s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-705425 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-717507
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-717507: (1.268563008s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.59s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-717507
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-717507: (22.587275775s)
--- PASS: TestMountStart/serial/RestartStopped (23.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-717507 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-805849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-805849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.434752617s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-805849 -- rollout status deployment/busybox: (4.756873956s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-62n7d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-w9d5j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-62n7d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-w9d5j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-62n7d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-w9d5j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.17s)
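DeployApp2Nodes above rolls out the busybox deployment, lists the pod names with a jsonpath query, and resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from every pod. The helper below is a hypothetical sketch of that per-pod DNS check; kubectlBin is a placeholder (the test itself goes through "minikube kubectl -p <profile> --").

// Package dnscheck: hypothetical sketch of the per-pod DNS lookups above.
package dnscheck

import (
	"fmt"
	"os/exec"
	"strings"
)

// LookupFromAllPods lists pod names with the same jsonpath query used above
// and runs nslookup for each target name inside every pod.
func LookupFromAllPods(kubectlBin string, targets []string) error {
	out, err := exec.Command(kubectlBin, "get", "pods", "-o",
		"jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return err
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, target := range targets {
			if err := exec.Command(kubectlBin, "exec", pod, "--", "nslookup", target).Run(); err != nil {
				return fmt.Errorf("nslookup %s failed in pod %s: %w", target, pod, err)
			}
		}
	}
	return nil
}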

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-62n7d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-62n7d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-w9d5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-805849 -- exec busybox-7dff88458-w9d5j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
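PingHostFrom2Pods above derives the host's address inside each pod by parsing nslookup output for host.minikube.internal (the awk/cut pipeline picks the address field) and then pings it once. The helper below is a hypothetical sketch of that check with the same shell pipeline; package, function, and kubectlBin are placeholders.

// Package hostping: hypothetical sketch of the host-reachability check above.
package hostping

import (
	"os/exec"
	"strings"
)

// PingHostFromPod resolves host.minikube.internal inside the pod using the
// same nslookup/awk/cut pipeline as above, then pings that address once.
func PingHostFromPod(kubectlBin, pod string) error {
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command(kubectlBin, "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		return err
	}
	hostIP := strings.TrimSpace(string(out))
	return exec.Command(kubectlBin, "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run()
}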

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-805849 -v 3 --alsologtostderr
E1011 21:45:09.457549   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:45:24.492273   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-805849 -v 3 --alsologtostderr: (49.739109818s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.30s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-805849 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp testdata/cp-test.txt multinode-805849:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849:/home/docker/cp-test.txt multinode-805849-m02:/home/docker/cp-test_multinode-805849_multinode-805849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test_multinode-805849_multinode-805849-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849:/home/docker/cp-test.txt multinode-805849-m03:/home/docker/cp-test_multinode-805849_multinode-805849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test_multinode-805849_multinode-805849-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp testdata/cp-test.txt multinode-805849-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt multinode-805849:/home/docker/cp-test_multinode-805849-m02_multinode-805849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test_multinode-805849-m02_multinode-805849.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m02:/home/docker/cp-test.txt multinode-805849-m03:/home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test_multinode-805849-m02_multinode-805849-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp testdata/cp-test.txt multinode-805849-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3100717761/001/cp-test_multinode-805849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt multinode-805849:/home/docker/cp-test_multinode-805849-m03_multinode-805849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849 "sudo cat /home/docker/cp-test_multinode-805849-m03_multinode-805849.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 cp multinode-805849-m03:/home/docker/cp-test.txt multinode-805849-m02:/home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 ssh -n multinode-805849-m02 "sudo cat /home/docker/cp-test_multinode-805849-m03_multinode-805849-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)
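CopyFile above repeats one pattern for every node pair: copy a file in with "minikube cp", then read it back with "minikube ssh -n <node> 'sudo cat ...'" to confirm it arrived. The helper below is a hypothetical sketch of that local-to-node case (the test also copies node-to-node); names and the content comparison are assumptions.

// Package cpcheck: hypothetical sketch of the copy-and-verify pattern above.
package cpcheck

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// CopyAndVerify copies src to dst on the named node with "minikube cp" and
// reads it back over "minikube ssh -n <node>" to confirm the contents match.
func CopyAndVerify(minikubeBin, profile, node, src, dst string) error {
	if err := exec.Command(minikubeBin, "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
		return fmt.Errorf("cp to %s failed: %w", node, err)
	}
	got, err := exec.Command(minikubeBin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("read-back on %s failed: %w", node, err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("contents of %s on %s do not match %s", dst, node, src)
	}
	return nil
}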

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 node stop m03: (1.481679592s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-805849 status: exit status 7 (414.021823ms)

                                                
                                                
-- stdout --
	multinode-805849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-805849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-805849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr: exit status 7 (415.013692ms)

                                                
                                                
-- stdout --
	multinode-805849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-805849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-805849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:45:47.946258   46550 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:45:47.946342   46550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:45:47.946346   46550 out.go:358] Setting ErrFile to fd 2...
	I1011 21:45:47.946351   46550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:45:47.946538   46550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 21:45:47.946708   46550 out.go:352] Setting JSON to false
	I1011 21:45:47.946734   46550 mustload.go:65] Loading cluster: multinode-805849
	I1011 21:45:47.947219   46550 notify.go:220] Checking for updates...
	I1011 21:45:47.947812   46550 config.go:182] Loaded profile config "multinode-805849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 21:45:47.947854   46550 status.go:174] checking status of multinode-805849 ...
	I1011 21:45:47.948677   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:47.948727   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:47.964328   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I1011 21:45:47.964847   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:47.965475   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:47.965512   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:47.965902   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:47.966074   46550 main.go:141] libmachine: (multinode-805849) Calling .GetState
	I1011 21:45:47.967968   46550 status.go:371] multinode-805849 host status = "Running" (err=<nil>)
	I1011 21:45:47.967983   46550 host.go:66] Checking if "multinode-805849" exists ...
	I1011 21:45:47.968304   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:47.968338   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:47.984013   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I1011 21:45:47.984502   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:47.985013   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:47.985038   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:47.985462   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:47.985662   46550 main.go:141] libmachine: (multinode-805849) Calling .GetIP
	I1011 21:45:47.988876   46550 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:45:47.989350   46550 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:45:47.989379   46550 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:45:47.989708   46550 host.go:66] Checking if "multinode-805849" exists ...
	I1011 21:45:47.990096   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:47.990149   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:48.004835   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I1011 21:45:48.005263   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:48.005752   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:48.005778   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:48.006085   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:48.006273   46550 main.go:141] libmachine: (multinode-805849) Calling .DriverName
	I1011 21:45:48.006462   46550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:45:48.006490   46550 main.go:141] libmachine: (multinode-805849) Calling .GetSSHHostname
	I1011 21:45:48.009600   46550 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:45:48.010179   46550 main.go:141] libmachine: (multinode-805849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:35:e7", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:43:02 +0000 UTC Type:0 Mac:52:54:00:2b:35:e7 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-805849 Clientid:01:52:54:00:2b:35:e7}
	I1011 21:45:48.010277   46550 main.go:141] libmachine: (multinode-805849) DBG | domain multinode-805849 has defined IP address 192.168.39.81 and MAC address 52:54:00:2b:35:e7 in network mk-multinode-805849
	I1011 21:45:48.010362   46550 main.go:141] libmachine: (multinode-805849) Calling .GetSSHPort
	I1011 21:45:48.010520   46550 main.go:141] libmachine: (multinode-805849) Calling .GetSSHKeyPath
	I1011 21:45:48.010673   46550 main.go:141] libmachine: (multinode-805849) Calling .GetSSHUsername
	I1011 21:45:48.010877   46550 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849/id_rsa Username:docker}
	I1011 21:45:48.095620   46550 ssh_runner.go:195] Run: systemctl --version
	I1011 21:45:48.101756   46550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:45:48.116485   46550 kubeconfig.go:125] found "multinode-805849" server: "https://192.168.39.81:8443"
	I1011 21:45:48.116520   46550 api_server.go:166] Checking apiserver status ...
	I1011 21:45:48.116553   46550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:45:48.130447   46550 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1131/cgroup
	W1011 21:45:48.140732   46550 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1131/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1011 21:45:48.140787   46550 ssh_runner.go:195] Run: ls
	I1011 21:45:48.145876   46550 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1011 21:45:48.150960   46550 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I1011 21:45:48.150978   46550 status.go:463] multinode-805849 apiserver status = Running (err=<nil>)
	I1011 21:45:48.150986   46550 status.go:176] multinode-805849 status: &{Name:multinode-805849 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:45:48.151000   46550 status.go:174] checking status of multinode-805849-m02 ...
	I1011 21:45:48.151265   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:48.151300   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:48.166922   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I1011 21:45:48.167471   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:48.167926   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:48.167951   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:48.168220   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:48.168353   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetState
	I1011 21:45:48.170094   46550 status.go:371] multinode-805849-m02 host status = "Running" (err=<nil>)
	I1011 21:45:48.170127   46550 host.go:66] Checking if "multinode-805849-m02" exists ...
	I1011 21:45:48.170404   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:48.170440   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:48.185035   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I1011 21:45:48.185418   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:48.185871   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:48.185894   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:48.186218   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:48.186410   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetIP
	I1011 21:45:48.188980   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | domain multinode-805849-m02 has defined MAC address 52:54:00:3b:0e:f1 in network mk-multinode-805849
	I1011 21:45:48.189363   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:0e:f1", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:44:03 +0000 UTC Type:0 Mac:52:54:00:3b:0e:f1 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-805849-m02 Clientid:01:52:54:00:3b:0e:f1}
	I1011 21:45:48.189394   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | domain multinode-805849-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:3b:0e:f1 in network mk-multinode-805849
	I1011 21:45:48.189537   46550 host.go:66] Checking if "multinode-805849-m02" exists ...
	I1011 21:45:48.189823   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:48.189855   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:48.204276   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I1011 21:45:48.204595   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:48.204973   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:48.204990   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:48.205323   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:48.205482   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .DriverName
	I1011 21:45:48.205633   46550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:45:48.205652   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetSSHHostname
	I1011 21:45:48.208354   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | domain multinode-805849-m02 has defined MAC address 52:54:00:3b:0e:f1 in network mk-multinode-805849
	I1011 21:45:48.208719   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:0e:f1", ip: ""} in network mk-multinode-805849: {Iface:virbr1 ExpiryTime:2024-10-11 22:44:03 +0000 UTC Type:0 Mac:52:54:00:3b:0e:f1 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-805849-m02 Clientid:01:52:54:00:3b:0e:f1}
	I1011 21:45:48.208757   46550 main.go:141] libmachine: (multinode-805849-m02) DBG | domain multinode-805849-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:3b:0e:f1 in network mk-multinode-805849
	I1011 21:45:48.208886   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetSSHPort
	I1011 21:45:48.209049   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetSSHKeyPath
	I1011 21:45:48.209201   46550 main.go:141] libmachine: (multinode-805849-m02) Calling .GetSSHUsername
	I1011 21:45:48.209340   46550 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19749-11611/.minikube/machines/multinode-805849-m02/id_rsa Username:docker}
	I1011 21:45:48.285845   46550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:45:48.299391   46550 status.go:176] multinode-805849-m02 status: &{Name:multinode-805849-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:45:48.299436   46550 status.go:174] checking status of multinode-805849-m03 ...
	I1011 21:45:48.299755   46550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1011 21:45:48.299798   46550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1011 21:45:48.314895   46550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I1011 21:45:48.315332   46550 main.go:141] libmachine: () Calling .GetVersion
	I1011 21:45:48.315789   46550 main.go:141] libmachine: Using API Version  1
	I1011 21:45:48.315804   46550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1011 21:45:48.316049   46550 main.go:141] libmachine: () Calling .GetMachineName
	I1011 21:45:48.316254   46550 main.go:141] libmachine: (multinode-805849-m03) Calling .GetState
	I1011 21:45:48.317746   46550 status.go:371] multinode-805849-m03 host status = "Stopped" (err=<nil>)
	I1011 21:45:48.317759   46550 status.go:384] host is not running, skipping remaining checks
	I1011 21:45:48.317763   46550 status.go:176] multinode-805849-m03 status: &{Name:multinode-805849-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
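With m03 stopped, "minikube status" above still prints the full per-node report but exits with status 7, so the test treats the non-zero exit as data rather than a failure. The helper below is a hypothetical sketch of handling that case (package and function names are placeholders), returning the report even when the command exits non-zero.

// Package statuscheck: hypothetical sketch of tolerating non-zero "minikube status" exits.
package statuscheck

import (
	"errors"
	"os/exec"
)

// StatusReport runs "minikube status" and returns its output even when the
// command exits non-zero, since a stopped node is a state, not an error.
func StatusReport(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "status", "--alsologtostderr").CombinedOutput()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "", err // the binary itself could not be run
	}
	return string(out), nil
}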

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 node start m03 -v=7 --alsologtostderr: (40.00744665s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-805849 node delete m03: (1.557531782s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (182.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-805849 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1011 21:55:24.495574   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:06.383240   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-805849 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m2.189953665s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-805849 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.69s)
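After the restart, the test confirms every node reports Ready using the go-template shown in the kubectl command above. The helper below is a hypothetical sketch that runs the same template (shell quoting removed) and counts the "True" conditions; kubectlBin and the function name are placeholders.

// Package readycheck: hypothetical sketch of the node-readiness check above.
package readycheck

import (
	"os/exec"
	"strings"
)

// ReadyNodeCount renders each node's Ready condition with the go-template
// used above and counts how many report "True".
func ReadyNodeCount(kubectlBin string) (int, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command(kubectlBin, "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return 0, err
	}
	ready := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	return ready, nil
}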

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (46.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-805849
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-805849-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-805849-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.842211ms)

                                                
                                                
-- stdout --
	* [multinode-805849-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-805849-m02' is duplicated with machine name 'multinode-805849-m02' in profile 'multinode-805849'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-805849-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-805849-m03 --driver=kvm2  --container-runtime=crio: (45.376978026s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-805849
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-805849: exit status 80 (211.473447ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-805849 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-805849-m03 already exists in multinode-805849-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-805849-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.45s)

                                                
                                    
x
+
TestScheduledStopUnix (112.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-838579 --memory=2048 --driver=kvm2  --container-runtime=crio
E1011 22:01:49.461930   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-838579 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.162721208s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838579 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-838579 -n scheduled-stop-838579
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838579 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1011 22:01:52.741685   18814 retry.go:31] will retry after 61.561µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.742844   18814 retry.go:31] will retry after 150.833µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.744000   18814 retry.go:31] will retry after 151.824µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.745123   18814 retry.go:31] will retry after 449.841µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.746257   18814 retry.go:31] will retry after 273.07µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.747380   18814 retry.go:31] will retry after 1.010076ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.748512   18814 retry.go:31] will retry after 778.492µs: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.749646   18814 retry.go:31] will retry after 1.497776ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.751831   18814 retry.go:31] will retry after 3.168714ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.756044   18814 retry.go:31] will retry after 3.754146ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.760529   18814 retry.go:31] will retry after 7.882736ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.769126   18814 retry.go:31] will retry after 10.95864ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.780346   18814 retry.go:31] will retry after 14.754807ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.795598   18814 retry.go:31] will retry after 10.416954ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
I1011 22:01:52.806858   18814 retry.go:31] will retry after 28.331514ms: open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/scheduled-stop-838579/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838579 --cancel-scheduled
E1011 22:02:06.382464   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838579 -n scheduled-stop-838579
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-838579
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-838579 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-838579
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-838579: exit status 7 (63.070148ms)

                                                
                                                
-- stdout --
	scheduled-stop-838579
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838579 -n scheduled-stop-838579
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-838579 -n scheduled-stop-838579: exit status 7 (61.202357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-838579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-838579
--- PASS: TestScheduledStopUnix (112.73s)
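TestScheduledStopUnix above schedules a stop five minutes out, replaces it with a 15-second schedule, cancels it, then re-schedules and confirms the profile ends up Stopped. The helper below is a hypothetical sketch of the schedule-then-cancel part of that flow, using the same --schedule, --format={{.TimeToStop}}, and --cancel-scheduled flags seen above; package, function, and binary path are placeholders.

// Package schedstop: hypothetical sketch of the schedule/cancel flow above.
package schedstop

import (
	"fmt"
	"os/exec"
	"strings"
)

// ScheduleThenCancel schedules a stop after `delay` (e.g. "5m"), reports the
// pending TimeToStop the same way the test queries it, then cancels the stop.
func ScheduleThenCancel(minikubeBin, profile, delay string) error {
	if err := exec.Command(minikubeBin, "stop", "-p", profile, "--schedule", delay).Run(); err != nil {
		return err
	}
	out, err := exec.Command(minikubeBin, "status", "--format={{.TimeToStop}}",
		"-p", profile, "-n", profile).Output()
	if err != nil {
		return err
	}
	fmt.Printf("scheduled stop pending, TimeToStop=%s\n", strings.TrimSpace(string(out)))
	return exec.Command(minikubeBin, "stop", "-p", profile, "--cancel-scheduled").Run()
}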

                                                
                                    
x
+
TestRunningBinaryUpgrade (194.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2914628354 start -p running-upgrade-604134 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2914628354 start -p running-upgrade-604134 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m25.71388858s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-604134 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-604134 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m44.275311586s)
helpers_test.go:175: Cleaning up "running-upgrade-604134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-604134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-604134: (1.152208329s)
--- PASS: TestRunningBinaryUpgrade (194.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.199842ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-320768] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (97.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-320768 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-320768 --driver=kvm2  --container-runtime=crio: (1m37.014013611s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-320768 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-579309 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-579309 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.213266ms)

                                                
                                                
-- stdout --
	* [false-579309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 22:03:06.889436   54642 out.go:345] Setting OutFile to fd 1 ...
	I1011 22:03:06.889552   54642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:03:06.889563   54642 out.go:358] Setting ErrFile to fd 2...
	I1011 22:03:06.889570   54642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 22:03:06.889829   54642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-11611/.minikube/bin
	I1011 22:03:06.890586   54642 out.go:352] Setting JSON to false
	I1011 22:03:06.891839   54642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6332,"bootTime":1728677855,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1011 22:03:06.891907   54642 start.go:139] virtualization: kvm guest
	I1011 22:03:06.894230   54642 out.go:177] * [false-579309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1011 22:03:06.895483   54642 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 22:03:06.895495   54642 notify.go:220] Checking for updates...
	I1011 22:03:06.897901   54642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 22:03:06.899018   54642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-11611/kubeconfig
	I1011 22:03:06.900176   54642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-11611/.minikube
	I1011 22:03:06.901330   54642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1011 22:03:06.902456   54642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 22:03:06.904244   54642 config.go:182] Loaded profile config "NoKubernetes-320768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:06.904383   54642 config.go:182] Loaded profile config "force-systemd-env-326657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:06.904525   54642 config.go:182] Loaded profile config "offline-crio-313531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1011 22:03:06.904619   54642 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 22:03:06.942275   54642 out.go:177] * Using the kvm2 driver based on user configuration
	I1011 22:03:06.943743   54642 start.go:297] selected driver: kvm2
	I1011 22:03:06.943761   54642 start.go:901] validating driver "kvm2" against <nil>
	I1011 22:03:06.943772   54642 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 22:03:06.945684   54642 out.go:201] 
	W1011 22:03:06.946899   54642 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1011 22:03:06.948048   54642 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-579309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-579309" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-579309

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-579309"

                                                
                                                
----------------------- debugLogs end: false-579309 [took: 2.674176444s] --------------------------------
helpers_test.go:175: Cleaning up "false-579309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-579309
--- PASS: TestNetworkPlugins/group/false (2.92s)
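The MK_USAGE exit above is the expected outcome: with the crio runtime, minikube refuses --cni=false because CRI-O requires a CNI plugin, so the test only checks for the usage error and then deletes the profile. For comparison (not something this test runs), an explicit CNI selection is accepted; a sketch with the bridge plugin, reusing the same profile name as a placeholder:

# Accepted: crio with an explicit CNI (compare the bridge group later in this report).
$ out/minikube-linux-amd64 start -p false-579309 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio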

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (153.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1046038011 start -p stopped-upgrade-173320 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1046038011 start -p stopped-upgrade-173320 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.008315172s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1046038011 -p stopped-upgrade-173320 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1046038011 -p stopped-upgrade-173320 stop: (2.125966714s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-173320 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-173320 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.008472816s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.14s)
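The upgrade path exercised here is: start a cluster with an older release binary, stop it, then restart the same profile with the binary under test. Condensed from the steps above (the versioned binary path is the temp file the test extracted):

# 1. Create the cluster with the old release binary.
$ /tmp/minikube-v1.26.0.1046038011 start -p stopped-upgrade-173320 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
# 2. Stop it while still on the old version.
$ /tmp/minikube-v1.26.0.1046038011 -p stopped-upgrade-173320 stop
# 3. Restart the same profile with the binary under test.
$ out/minikube-linux-amd64 start -p stopped-upgrade-173320 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio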

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (60.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1011 22:05:24.492098   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --driver=kvm2  --container-runtime=crio: (58.420637741s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-320768 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-320768 status -o json: exit status 2 (260.816857ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-320768","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-320768
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-320768: (2.179378798s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (60.86s)
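The non-zero status exit is expected here: with Kubernetes removed, minikube status reports the host as Running but the kubelet and API server as Stopped, and it signals that mixed state through its exit code. A small sketch for checking the same fields from a script, assuming jq is available (the test itself does not use jq):

$ out/minikube-linux-amd64 -p NoKubernetes-320768 status -o json | jq -r '.Host, .Kubelet, .APIServer'
# Expected for this test: Running / Stopped / Stopped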

                                                
                                    
x
+
TestNoKubernetes/serial/Start (44.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-320768 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.244345723s)
--- PASS: TestNoKubernetes/serial/Start (44.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-173320
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-320768 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-320768 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.36211ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
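The systemctl probe is how the test confirms no kubelet is running: systemctl is-active exits 0 only when the unit is active, so the status-3 exit seen above is the desired result. A sketch of the same check run by hand (the echo of $? is only for illustration):

$ out/minikube-linux-amd64 ssh -p NoKubernetes-320768 "sudo systemctl is-active kubelet"
$ echo $?
# A non-zero exit means the kubelet unit is not active, which is what this test wants.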

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.137060843s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.96s)

                                                
                                    
x
+
TestPause/serial/Start (60.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-318346 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-318346 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m0.499362343s)
--- PASS: TestPause/serial/Start (60.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-320768
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-320768: (1.774098622s)
--- PASS: TestNoKubernetes/serial/Stop (1.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (49.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-320768 --driver=kvm2  --container-runtime=crio
E1011 22:07:06.382863   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-320768 --driver=kvm2  --container-runtime=crio: (49.224098103s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (49.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-320768 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-320768 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.988555ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (139.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-318346 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-318346 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m19.881197588s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (139.90s)

                                                
                                    
x
+
TestPause/serial/Pause (1.1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-318346 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-318346 --alsologtostderr -v=5: (1.100519526s)
--- PASS: TestPause/serial/Pause (1.10s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-318346 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-318346 --output=json --layout=cluster: exit status 2 (285.306792ms)

                                                
                                                
-- stdout --
	{"Name":"pause-318346","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-318346","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
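Status code 418 is how the cluster-layout status output marks paused components, which is why the command exits 2 even though nothing is wrong: the apiserver reports Paused (418), the kubelet Stopped (405), and kubeconfig stays OK (200). A sketch for pulling just those fields, assuming jq is installed:

$ out/minikube-linux-amd64 status -p pause-318346 --output=json --layout=cluster | jq -r '.StatusName, .Nodes[0].Components.apiserver.StatusName, .Nodes[0].Components.kubelet.StatusName'
# Expected while paused: Paused / Paused / Stopped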

                                                
                                    
x
+
TestPause/serial/Unpause (1.32s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-318346 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-318346 --alsologtostderr -v=5: (1.315227734s)
--- PASS: TestPause/serial/Unpause (1.32s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.02s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-318346 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-318346 --alsologtostderr -v=5: (1.018111836s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-318346 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (3.67s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.674207668s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (113.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1011 22:10:07.559385   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:10:24.493918   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m53.798621407s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (72.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.022145225s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (78.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m18.669785985s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-579309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9mdvg" [6a055890-384d-48a8-8f64-b28eb5f27f2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9mdvg" [6a055890-384d-48a8-8f64-b28eb5f27f2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009169007s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (95.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m35.842104s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.84s)
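Unlike the other groups, which name a built-in plugin (kindnet, calico, flannel, bridge), this group passes a manifest path to --cni, so minikube applies the test's own kube-flannel.yaml instead of a bundled plugin. Abridged from the command above:

$ out/minikube-linux-amd64 start -p custom-flannel-579309 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio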

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mnvf9" [fc77660d-e29f-4bc1-b64d-971a0ec84022] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003512284s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-579309 "pgrep -a kubelet"
I1011 22:12:03.831303   18814 config.go:182] Loaded profile config "kindnet-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6rdw5" [a5f6b5fe-aa89-413d-a2fe-fa23f1d9ac76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:12:06.382931   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6rdw5" [a5f6b5fe-aa89-413d-a2fe-fa23f1d9ac76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006120607s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
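The DNS/Localhost/HairPin trio above is the standard connectivity probe set each network-plugin group runs against its netcat deployment: cluster DNS resolution, a loopback dial, and a hairpin dial back to the pod through its own service. The underlying commands, runnable by hand against any of these profiles (auto-579309 shown):

# DNS: resolve the in-cluster API service name.
$ kubectl --context auto-579309 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: dial the pod's own port over loopback.
$ kubectl --context auto-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: dial the pod back through its service name.
$ kubectl --context auto-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"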

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (105.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m45.20858742s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (109.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m49.845499579s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5hg8s" [af7bc00c-d7a8-4f54-ad20-0bc62151e167] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021482245s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-579309 "pgrep -a kubelet"
I1011 22:13:15.673356   18814 config.go:182] Loaded profile config "calico-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-579309 replace --force -f testdata/netcat-deployment.yaml: (1.395040981s)
I1011 22:13:17.071527   18814 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1011 22:13:17.131250   18814 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gwlbk" [47547f98-845d-411b-99d2-64408866c349] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gwlbk" [47547f98-845d-411b-99d2-64408866c349] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003850519s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-579309 "pgrep -a kubelet"
I1011 22:13:32.799989   18814 config.go:182] Loaded profile config "custom-flannel-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t87nm" [ec94798b-a1cc-4afa-8fca-5117afb49251] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t87nm" [ec94798b-a1cc-4afa-8fca-5117afb49251] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.309405315s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (83.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-579309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.358711836s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-579309 "pgrep -a kubelet"
I1011 22:14:08.509749   18814 config.go:182] Loaded profile config "enable-default-cni-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zgdvg" [2ca6e065-b795-4ad2-916d-587f49788693] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zgdvg" [2ca6e065-b795-4ad2-916d-587f49788693] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003925387s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bqfn8" [bf43a31c-7008-4b17-a367-414e02fe7e37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005651114s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-579309 "pgrep -a kubelet"
I1011 22:14:29.323487   18814 config.go:182] Loaded profile config "flannel-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2bwdn" [0207c35d-5e6c-4a67-9869-5ebef34618e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2bwdn" [0207c35d-5e6c-4a67-9869-5ebef34618e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004411459s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (108.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-390487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-390487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m48.935785479s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (96.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223942 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223942 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m36.204935407s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-579309 "pgrep -a kubelet"
I1011 22:15:11.179441   18814 config.go:182] Loaded profile config "bridge-579309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-579309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8s65n" [9919cd9a-0dba-41e9-9688-d8c04f799753] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8s65n" [9919cd9a-0dba-41e9-9688-d8c04f799753] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003583981s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-579309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-579309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-070708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-070708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m21.368057353s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-390487 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [04bdf3fd-715e-4679-8078-4c997f458e26] Pending
helpers_test.go:344: "busybox" [04bdf3fd-715e-4679-8078-4c997f458e26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [04bdf3fd-715e-4679-8078-4c997f458e26] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004800701s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-390487 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223942 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a37d1ca5-f8a3-4d31-8064-d2c71f79b9cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a37d1ca5-f8a3-4d31-8064-d2c71f79b9cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003856669s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223942 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-390487 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-390487 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-223942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-223942 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5fb5a640-b88e-48e1-b496-dc8b833c6685] Pending
E1011 22:17:01.084413   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5fb5a640-b88e-48e1-b496-dc8b833c6685] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1011 22:17:02.746362   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5fb5a640-b88e-48e1-b496-dc8b833c6685] Running
E1011 22:17:06.205880   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/auto-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:06.382677   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/addons-335640/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:17:07.868273   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/kindnet-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003822885s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-070708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-070708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (682.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-390487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-390487 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m22.495992926s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-390487 -n no-preload-390487
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (682.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (601s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223942 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 22:19:18.975379   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.067599   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.073936   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.085241   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.106552   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.147921   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.229376   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.390898   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:23.712599   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:24.354627   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:25.636869   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:28.198304   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:29.217480   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223942 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m0.753884964s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223942 -n embed-certs-223942
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-070708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 22:19:43.561332   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:49.699006   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:19:54.966907   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/custom-flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:04.043046   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.427112   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.433503   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.444833   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.466158   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.507529   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.588963   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:11.751244   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:12.072954   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:12.714992   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:13.997115   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:16.558589   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:21.680530   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:24.492983   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:30.661258   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:20:31.923025   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-070708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m57.879815042s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-070708 -n default-k8s-diff-port-070708
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-323416 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-323416 --alsologtostderr -v=3: (1.286604636s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-323416 -n old-k8s-version-323416: exit status 7 (61.641858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-323416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (51.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-555648 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 22:44:08.721308   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/enable-default-cni-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:44:23.068145   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/flannel-579309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-555648 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (51.977738116s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-555648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-555648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066118316s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-555648 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-555648 --alsologtostderr -v=3: (10.667621413s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.67s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555648 -n newest-cni-555648
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555648 -n newest-cni-555648: exit status 7 (66.481586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-555648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-555648 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1011 22:45:11.426790   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/bridge-579309/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:45:24.492043   18814 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-11611/.minikube/profiles/functional-297998/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-555648 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.559248179s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555648 -n newest-cni-555648
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-555648 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-555648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555648 -n newest-cni-555648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555648 -n newest-cni-555648: exit status 2 (229.619583ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555648 -n newest-cni-555648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555648 -n newest-cni-555648: exit status 2 (226.472118ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-555648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555648 -n newest-cni-555648
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555648 -n newest-cni-555648
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

                                                
                                    

Test skip (38/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
38 TestAddons/parallel/Olm 0
45 TestAddons/parallel/AmdGpuDevicePlugin 0
49 TestDockerFlags 0
52 TestDockerEnvContainerd 0
54 TestHyperKitDriverInstallOrUpdate 0
55 TestHyperkitDriverSkipUpgrade 0
106 TestFunctional/parallel/DockerEnv 0
107 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
155 TestGvisorAddon 0
177 TestImageBuild 0
204 TestKicCustomNetwork 0
205 TestKicExistingNetwork 0
206 TestKicCustomSubnet 0
207 TestKicStaticIP 0
239 TestChangeNoneUser 0
242 TestScheduledStopWindows 0
244 TestSkaffold 0
246 TestInsufficientStorage 0
250 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 2.87
264 TestNetworkPlugins/group/cilium 3.12
277 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-335640 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-579309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-579309

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-579309

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/hosts:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/resolv.conf:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-579309

>>> host: crictl pods:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: crictl containers:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> k8s: describe netcat deployment:
error: context "kubenet-579309" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-579309" does not exist

>>> k8s: netcat logs:
error: context "kubenet-579309" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-579309" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-579309" does not exist

>>> k8s: coredns logs:
error: context "kubenet-579309" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-579309" does not exist

>>> k8s: api server logs:
error: context "kubenet-579309" does not exist

>>> host: /etc/cni:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: ip a s:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: ip r s:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: iptables-save:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: iptables table nat:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-579309" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-579309" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-579309" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: kubelet daemon config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> k8s: kubelet logs:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-579309

>>> host: docker daemon status:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: docker daemon config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: docker system info:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: cri-docker daemon status:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: cri-docker daemon config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: cri-dockerd version:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: containerd daemon status:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: containerd daemon config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: containerd config dump:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: crio daemon status:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: crio daemon config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: /etc/crio:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

>>> host: crio config:
* Profile "kubenet-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-579309"

----------------------- debugLogs end: kubenet-579309 [took: 2.735249121s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-579309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-579309
--- SKIP: TestNetworkPlugins/group/kubenet (2.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-579309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-579309

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-579309

>>> host: /etc/nsswitch.conf:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/hosts:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/resolv.conf:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-579309

>>> host: crictl pods:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: crictl containers:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> k8s: describe netcat deployment:
error: context "cilium-579309" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-579309" does not exist

>>> k8s: netcat logs:
error: context "cilium-579309" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-579309" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-579309" does not exist

>>> k8s: coredns logs:
error: context "cilium-579309" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-579309" does not exist

>>> k8s: api server logs:
error: context "cilium-579309" does not exist

>>> host: /etc/cni:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: ip a s:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: ip r s:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: iptables-save:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: iptables table nat:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-579309

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-579309

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-579309" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-579309" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-579309

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-579309

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-579309" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-579309" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-579309" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-579309" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-579309" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: kubelet daemon config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> k8s: kubelet logs:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-579309

>>> host: docker daemon status:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: docker daemon config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: docker system info:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: cri-docker daemon status:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: cri-docker daemon config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: cri-dockerd version:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: containerd daemon status:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: containerd daemon config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: containerd config dump:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: crio daemon status:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: crio daemon config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: /etc/crio:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

>>> host: crio config:
* Profile "cilium-579309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579309"

----------------------- debugLogs end: cilium-579309 [took: 2.987551586s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-579309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-579309
--- SKIP: TestNetworkPlugins/group/cilium (3.12s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-590493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-590493
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    